

Journal of Visual Communication and Image Representation 10, 219–244 (1999)

Article ID jvci.1998.0403

Illumination Invariance and Object Model in

Content-Based Image and Video Retrieval

Ze-Nian Li,* Osmar R. Zaïane, and Zinovi Tauber

School of Computing Science, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6

E-mail: li@cs.sfu.ca, zaiane@cs.sfu.ca, zinovi@cs.sfu.ca

Received October 2, 1997; accepted October 30, 1998

With huge amounts of multimedia information connected to the global information network (Internet), efficient and effective image retrieval from large image and video repositories has become an imminent research issue. This article presents our research in the C-BIRD (content-based image retrieval in digital-libraries) project. In addition to the use of common features such as color, texture, shape, and their conjuncts, and the combined content-based and description-based techniques, it is shown that (a) color-channel normalization enables search by illumination invariance, and (b) feature localization and a three-step matching algorithm (color hypothesis, texture support, shape verification) facilitate search by object model in image and video databases. © 1999 Academic Press

Key Words: color; content-based retrieval; digital library; feature localization; generalized Hough transform; image and video databases; modeling and matching; segmentation; shape; texture.

1. INTRODUCTION

Image and video indexing and retrieval has lately drawn the attention of many researchers in the computer science community [1–9]. With the advent of the World-Wide Web (WWW), research in computer vision, artificial intelligence, databases, etc. is taken to a larger scale to address the problem of information retrieval from large repositories of images. Previous pattern recognition and image analysis algorithms in the vision and AI fields dealt with small sets of still images and did not scale. With large collections of images and video frames, scalability, speed, and efficiency are the essence for the success of an image and video retrieval system.

There are two main families of image and video indexing and retrieval systems: those based on the content of the images (content-based) like color, texture, shape, objects, etc., and those based on the description of the images (description-based) like keywords, size, caption,

This article was originally scheduled to be part of the special issue on Image Technology for World Wide Web Applications.

* Corresponding author.

219

1047-3203/99 $30.00
Copyright © 1999 by Academic Press
All rights of reproduction in any form reserved.


etc. While description-based image retrieval systems are relatively easier to implement and to design user interfaces for, they suffer from the same problems as the information retrieval systems in text databases or Web search engines. It has been demonstrated that search engines using current methods based on simple keyword matching perform poorly. The precision¹ of these search engines is very low and their recall² is inadequate. It has been demonstrated in [10] that any single web search engine provides the user with only 15–42% of the relevant documents. Furnas et al. [11] show that due to widespread synonymy and polysemy in natural languages, indexing methods based on the occurrence of single words do not perform adequately.

Content-based image retrieval systems use visual features to index images. These systems differ mainly in the way they extract the visual features and index images and in the way they are queried. Some systems are queried by providing an image sample. These systems search for similar images in the database by comparing the feature vector (or signature) extracted from the sample with the available feature vectors. The image retrieval system Image Surfer, provided by Yahoo, for example, is based on this type of search. Other systems are queried by specifying or sketching image features like color, shape, or texture, which are translated into a feature vector to be matched with the known feature vectors in the database. QBIC [2] and WebSeek [7], for example, provide both sample-based queries and image feature specification queries. WebSeer [12], on the other hand, combines image descriptions, like keywords, and image content, like specifying the number of faces in an image, and uses image content to distinguish between photographs and figures. However, the visual features extracted are very limited. Another major difference between image retrieval systems is in the domain they index. While Virage [4] indexes solely image databases, AMORE (advanced multimedia oriented retrieval engine) [9], using COIR (content-oriented image retrieval) [13], indexes images from the World-Wide Web. Content-based systems give a relatively satisfactory result with regard to the visual clues; however, their precision and recall are still not optimized.

Effective strategies for image retrieval will benefit from exploiting both content-based and description-based retrieval techniques and will combine conjunctions and disjunctions of image features and descriptions in the queries, as well as providing users with adequate and efficient user interfaces for both querying and browsing.

We have been developing the C-BIRD (content-based image retrieval in digital-libraries) system, which combines automatically generated keywords and visual descriptors like color, texture, shape, and feature localization, to index images and videos in the World-Wide Web. This paper presents our new results on Search by Illumination Invariance and Search by Object Model, and a discussion of feature localization versus image segmentation.

Swain and Ballard's work on color object recognition by means of a fast matching of color histograms [14] began an interest in the use of simple color-based features for image and video database retrieval. In this method, a database of coarse histograms indexed by three color values is built up. A very simple and fast histogram matching strategy can often identify the correct match for a new image, or a near-match, by using an L1 metric of histogram differences [14]. It was soon realized that, along with confounding factors such as object pose, noise, occlusion, shadows, clutter, specularities, and pixel saturation,

¹ Precision—the ratio of relevant documents to the total number of documents retrieved.

² Recall—the percentage of relevant documents among all possible relevant documents.

a major problem arose because of the effect of changing illumination on images of color objects [15]. After all, it would be quite natural for an object in a database to be presented as it appears imaged under some other lighting.

Several color object recognition schemes exist that purport to take illumination change into account in an invariant fashion. In [16] we address the problem of illumination change by extending the original Swain and Ballard method to include illumination invariance in a natural and simpler way than heretofore. First, it is argued that a normalization on each color channel of the images is really all that is required to deal properly with illumination invariance. Second, with an aim of reducing the dimensionality of the feature space involved, a full-color (3D) representation is replaced by 2D chromaticity histograms. It is shown that the essential illumination-invariant color information is maintained across this data reduction. The normalization step has the effect of undoing a changing-illumination-induced shift in pixel color-space position and in chromaticity space.

Histograms in chromaticity space are indexed by two values and are treated as though they were images. In order to greatly improve the efficiency in searching large image and video databases, the chromaticity histogram-images are compressed and then indexed into a database with a small feature vector based on the compressed histogram. In the current implementation the chromaticity histogram-image is first reduced by using a wavelet scaling function. Then the discrete cosine transform (DCT) is applied, followed by a zonal coding [17] of the DCT image. This results in an effective low-pass filtering of the chromaticity histogram. The resulting indexing scheme is very efficient in that it uses a DCT-chromaticity vector of only 36 values. We have also applied this technique to video segmentation [18]. Since illumination changes much more frequently in video, due to camera motions and object motions, our color-channel-normalization method is shown to clearly outperform other cut-detection methods that rely only on color histograms.

Most existing techniques for content-based image retrieval rely on global image features such as color and texture. These global methods are based on simple statistics extracted from the entire images. They are easy to obtain and to store in a database, and their matching takes little time. Inevitably, the global methods lack the power of locating specific objects and identifying their details (size, position, orientation, etc.). Some extensions to the global method include search by color layout [2], by sketch [6, 9], and by color regions according to their spatial arrangements [7, 8].

It has long been argued that segmentation plays an important role in human vision [19]. Ever since the 1960s and 1970s, image segmentation has been one of the most persistent research areas in computer vision. It is hoped that recognition would become much easier after segmentation. However, it is well known that a good image segmentation is often impossible to obtain. Commonly, images are either "over-segmented" or "under-segmented." As Marr [20] pointed out, "the difficulties in trying to formulate what should be recovered as a region from an image are so great as to amount almost to philosophical problems." In [21] it is argued that, like many vision problems, the problem of image segmentation is "ill-defined."

Realizing the inadequacy of global features and methods such as the color histogram (or texture) for content-based image retrieval, more and more researchers have proposed image segmentation (based on color, texture, and shape) [7, 8, 22] as a main step toward effective content-based image retrieval. Apparently, as long as a good segmentation is obtained, powerful constraints such as spatial relationships can be enforced. To overcome the difficulties in achieving a sound segmentation, heuristics such as size-of-the-region and


maximum-number-of-regions are naturally developed to avoid over-segmentation; also, motion in the spatiotemporal domain is exploited to improve image segmentation in video [8]. This paper argues that, despite its popularity, traditional image segmentation is not a good image preprocessing tool for content-based retrieval from image and video databases. Instead, a new approach of feature localization is proposed. Based on the locales extracted by the feature localization, a three-step algorithm that employs color hypothesis, texture support, and shape verification is developed. Since the first two steps enable the estimation of the size, orientation, and location of the object, the use of the generalized Hough transform (GHT) in the last step becomes very efficient and viable.

In contrast to most existing approaches, our work has the following characteristics:

(a) the use of color-channel normalization for illuminant-invariant image and video retrieval, (b) the exploitation of intrinsic and compact feature vectors and the three-step matching algorithm for search by object model, and (c) the exploration of feature localization in content-based image and video retrieval.

The remainder of this paper is organized as follows. In Section 2, we describe the illumination-invariant color indexing technology based on chromaticity of the normalized images. The discussion of feature localization versus image segmentation is in Section 3. Section 4 presents the modeling and matching techniques. Section 5 describes the experimental results. Section 6 summarizes our conclusions and future enhancements.

2. ILLUMINATION-INVARIANT COLOR INDEXING

In this section it is shown that a simple color indexing method that is efficient and invariant under illuminant change can be derived for search by illumination invariance if we store a representation of a chromaticity histogram for each image that is first normalized, reduced in size by a wavelet transformation, and then further reduced by going to the frequency domain and discarding higher-frequency DCT coefficients.

2.1. Chromaticity Histogram of Color-Channel-Normalized Image

Define the chromaticity (r, g) for each pixel by [23]

    r = R/(R + G + B),  g = G/(R + G + B).  (1)

The chromaticity is the projection of an RGB triple onto the planar triangle joining unit distance along each of the color axes. It is important to note that, although the definition of the chromaticity immediately provides some normalization, i.e., r + g + b = 1 if b is defined accordingly, and it provides invariance to the intensity of the light by dividing by the sum of the three color bands, the usage of chromaticity itself is insufficient for illumination invariance when the color (chrominance) of the light changes.

A color-channel-normalization method was proposed in [16]. Given an image of size m × n, each of the RGB channels is treated as a long vector of length m · n. It is shown in [16] that by employing an L2 normalization on each of the three RGB vectors, the effect of any illumination change is approximately compensated. The color-channel-normalization step effectively accomplishes illumination invariance. The usage of chromaticity provides two additional advantages: (a) the color space is reduced from 3D (e.g., RGB, HSV, etc.) to 2D, hence fewer computations; (b) the chromaticity value is indeed guaranteed to be in the range of [0, 1.0]. This helps the formation of a small (well-bounded) 2D histogram space later.
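As a rough illustration, the normalization and chromaticity steps above can be sketched as follows. This is a minimal NumPy sketch under our own function names, not the authors' implementation:

```python
import numpy as np

def channel_normalize(img):
    """L2-normalize each RGB channel, treated as one long m*n vector.

    This approximately compensates for illumination change, per the
    color-channel-normalization idea described in the text.
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(3):
        norm = np.linalg.norm(img[..., c].ravel())
        out[..., c] = img[..., c] / norm if norm > 0 else 0.0
    return out

def chromaticity(img, eps=1e-12):
    """Per-pixel chromaticity (r, g) = (R, G) / (R + G + B), Eq. (1)."""
    s = img.sum(axis=-1, keepdims=True)
    # Each chromaticity value is guaranteed to lie in [0, 1]
    return img[..., :2] / np.maximum(s, eps)
```

Because each channel is divided by its own L2 norm, any per-channel (diagonal) scaling of the illuminant cancels out before the chromaticity is formed.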


From the chromaticity image, a chromaticity histogram can be obtained. This histogram itself is viewed as a histogram-image; i.e., each bin value is viewed as a pixel value. Some image compression techniques will then be applied to this histogram-image. Note that since r + g ≤ 1, the chromaticity entries must lie below the main diagonal; thus only half of the pixels in the histogram-image are used.

2.2. Histogram Intersection

In [14], a very useful histogram metric is developed, which we adopt in C-BIRD for both the usual color histogram matching and the uncompressed chromaticity histogram matching. Adapting Swain and Ballard's definition to the present situation, we define the intersection of chromaticity histograms H_a and H_b as

    μ ≡ Σ_{i,j} min{H_a(i, j), H_b(i, j)}.  (2)

Swain and Ballard normalize intersection (or match) values by the number of pixels in the model histogram; thus, matches are between 0 and 1. Alternatively, one can make the volume under each histogram equal to unity, effectively making each image have the same number of pixels and turning the histogram into a probability density. Time for histogram intersection is proportional to the number of histogram bins, and so it is very fast.

It can be shown that histogram intersection is equivalent to 1 minus an L1 distance, and so (1 − μ) forms a metric δ, with δ = 1 − μ = (1/n) Σ |H_a − H_b|, where n is the number of histogram bins. The utility of this metric is that it helps to alleviate the effects of noise in the following way. Suppose an image has significant noise that does not occur in the particular model image chromaticity histogram being compared to; then such noise values might tend to dominate in an L2, squared-differences norm. Instead, here, zero occurrences in the model histogram are counted and such effects do not contribute to the metric.

Content-based retrieval proceeds by intersecting the chromaticity histogram of an unknown object with similar histograms precomputed and stored in a database. The highest value of μ, or in other words the smallest distance value (1 − μ), indicates the database image that matches best.
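Equation (2) and the derived distance can be sketched directly; this assumes unit-volume histograms (the alternative normalization mentioned above), with function names of our choosing:

```python
import numpy as np

def intersection(Ha, Hb):
    """Histogram intersection mu = sum_ij min(Ha_ij, Hb_ij), Eq. (2).

    With both histograms normalized to unit volume, the match value
    lies in [0, 1], and identical histograms give mu = 1.
    """
    return float(np.minimum(Ha, Hb).sum())

def distance(Ha, Hb):
    """delta = 1 - mu, small when the histograms match well."""
    return 1.0 - intersection(Ha, Hb)
```

Note that bins that are zero in the model histogram contribute nothing to μ, which is the noise-suppression property discussed in the text.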

2.3. Chromaticity Histogram-Image Compression

Chromaticity histogram matching without compression could be computationally intensive. We would like to recover an accurate approximation of histogram-images without sacrificing efficiency. As a guiding principle, it would also be sensible to maintain a linear relationship between the histogram-image and its compressed representation. We can adhere to this principle by applying only linear operations while compressing the histogram-images. Therefore, here we first apply a linear low-pass filter to both histogram-images, resulting in new histograms H′_a and H′_b. To best approximate the chromaticity histograms, the low-pass filtered histogram-images should approximate the original ones as closely as possible, yet be of lower resolution. The scaling function of bi-orthonormal wavelets, as a symmetrical low-pass filter, can be exploited to that end. Basically, scaling functions with more "taps" use polynomials of higher order to approximate the original function (the number of taps is the number of nonzero coefficients) [24]. Our main concern is to capture the


most detail but in lower resolution. In [16], a good balance is achieved between efficiency and precision by using the symmetrical 9-tap filter.

After applying the scaling function several times to the original histogram-images, assuming for simplicity square histogram-images with resolution 2^n × 2^n, lower-resolution histogram-images of size 16 × 16 are obtained. Now consider the DCT: if we denote the result of H′ transformed via a DCT by Ĥ then, since the DCT is linear, we could confidently index on Ĥ. Since the lower frequencies in the DCT capture most of the energy of an image, after applying the DCT we can retain just the lower-frequency coefficients for histogram-image database indexing with fairly good accuracy—a very effective and efficient way of realizing a further low-pass filtering. By experiment [16] it is found that using only 36 coefficients worked well, these being the first 36 numbers in the upper left corner of the DCT coefficient matrix.³

Denote by H_d the 36 values derived from the first 36 DCT coefficients. We index on the L2 distance between H_d for the model histogram-image and that for the test histogram-image. Populating the database, then, consists of calculating off-line the 36 values H_d, viewed as indices for each model image. For an image query, first the 36 values for the query image are computed, thus obtaining its H′_d; then for every model image the L2 distance [Σ (H_d − H′_d)²]^{1/2} is calculated. The model image minimizing the distance is taken to be a match for the query image.

Note that in this method only reduced, DCT-transformed, quantized histogram-images are used—no inverse transforms are necessary, and the indexing process is carried out entirely in the compressed domain.
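The DCT indexing step can be sketched as follows. The orthonormal DCT-II matrix is built by hand so the sketch is self-contained; reading the "first 36 numbers in the upper left corner" as the 6 × 6 low-frequency block is our assumption, and the function names are ours:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)   # DC row scaling
    M[1:] *= np.sqrt(2.0 / n)  # AC row scaling
    return M

def dct_index(hist16):
    """Index a 16x16 low-pass-filtered histogram-image by 36 DCT values."""
    D = dct_matrix(16)
    coeffs = D @ hist16 @ D.T        # separable 2D DCT
    return coeffs[:6, :6].ravel()    # keep the 6x6 low-frequency block

def index_distance(hd_query, hd_model):
    """L2 distance between two 36-value DCT-chromaticity vectors."""
    return float(np.linalg.norm(hd_query - hd_model))
```

Because the DCT is applied to the already wavelet-reduced 16 × 16 histogram-image, both the indexing and the matching stay entirely in the compressed domain, as noted above.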

The choices that the reduced resolution of the wavelet chromaticity histogram-images be 16 × 16 and that the number of DCT coefficients retained be 36 are made quite empirically. Detailed analysis and some variation of the choice of the DCT coefficients are provided in [16].

3. FEATURE LOCALIZATION vs IMAGE SEGMENTATION

3.1. Definition of Region Revisited

Image segmentation is a process to segment an entire image into disjoint regions. A region consists of a set of pixels that share certain properties, e.g., similar color (or gray-level intensity) and similar texture. As in [25], if R is a region,

1. R is connected, iff all pixels in R are connected,⁴
2. R_i ∩ R_j = ∅, i ≠ j,
3. ∪_{k=1}^m R_k = I, the entire image.

Although regions do not have to be connected, most available region-based and/or edge-based segmentation methods would yield connected regions, and it is error-prone to merge

³ Instead of using a conventional 8 × 8 window for the DCT, a 16 × 16 window is adopted. As a result, a finer resolution (twice as high as with 8 × 8) in the spatial frequency domain is realized. Since the low-pass filtering after DCT can only retain a limited number of coefficients for efficiency, the net effect of having a larger (16 × 16) window is that a more detailed parameterized description at the lower end of the spectrum is facilitated. This is beneficial when very low-resolution wavelet images are used for matching in our method.

⁴ Either 4-connected or 8-connected. See [25] for more details.

some of them into nonconnected regions. In short, the traditional segmentation algorithms assume that (1) regions are mostly connected, (2) regions are disjoint, and (3) segmentation is complete, in that any pixel will be assigned to some region and the union of all regions is the entire image.

Such a segmentation algorithm will yield more than a dozen purple regions, one for each character, for the title of the book shown in Fig. 7A. It will also yield (unexpectedly) many white regions, since all the white blobs inside the letters "A," "P," "R," "O" will unfortunately be identified as regions, unless some really effective algorithm can identify them as belonging to a nonconnected region, together with the two white boxes. The above example, albeit simple and not at all unusual, indicates that traditional image segmentation does not yield useful grouping and representation for object recognition.

3.2. Locale for Feature Localization

We argue that a more useful and attainable process is feature localization, which identifies features by their locality and proximity. A new concept, the locale, is hence defined. Locales use squares of pixels (tiles) instead of pixels as their smallest unit at the image level.

DEFINITION. A locale L_x is a local enclosure (or locality) of feature x. L_x has the following descriptors:

• envelope L_x—a set of tiles to represent the locality of L_x.
• geometric parameters—mass M(L_x), centroid C(L_x), eccentricity E(L_x), and shape parameters for the locale, etc.

A tile is a square area in an image; its size is chosen as 16 × 16 pixels in this paper. The tile is the building unit for envelopes. A tile is "red" if a sufficient number of pixels (e.g., 10%) within the tile are red. It follows that a tile can be both "red" and "blue" if some of its pixels are red and some are blue. While the pixel is the building block for image segmentation, the tile is the building block for feature localization. Tiles are grouped into an envelope if they are geometrically close. The closeness will be measured by eccentricity and distance, to be discussed below.
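The tile classification just described can be sketched as a mapping from a per-pixel feature mask to a per-tile grid. The 16 × 16 tile size and 10% fraction follow the text; the function names are ours:

```python
import numpy as np

TILE = 16  # tile size in pixels, as chosen in the paper

def tile_has_feature(tile_mask, fraction=0.10):
    """A tile exhibits a feature (e.g., is "red") if a sufficient
    fraction of its pixels have that feature; 10% follows the text."""
    return tile_mask.mean() >= fraction

def feature_tiles(feature_mask):
    """Map a per-pixel boolean feature mask to a per-tile boolean grid.

    A tile may carry several features at once, so one such grid would
    be computed per feature of interest.
    """
    h, w = feature_mask.shape
    grid = np.zeros((h // TILE, w // TILE), dtype=bool)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            block = feature_mask[i*TILE:(i+1)*TILE, j*TILE:(j+1)*TILE]
            grid[i, j] = tile_has_feature(block)
    return grid
```

Because each feature gets its own grid, a tile can be marked both "red" and "blue," which a hard pixel-to-region segmentation cannot express.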

Figure 1 shows a square image that has 8 × 8 tiles, two locales for color red, and one locale for color blue. The envelope L¹_red in Fig. 1, for example, consists of five tiles.

FIG. 1. An image of 8 × 8 tiles and locales for colors red and blue.


M(L_x) is the number of pixels in L_x that actually have feature x, e.g., the number of pixels that are red. M(L_x) is usually less than the area of L_x, although it could be equal to it. C(L_x) is simply the centroid of the mass. E(L_x) is a measure of the average distance from pixels in L_x to the centroid; it measures the eccentricity of L_x. Note that M, C, E, etc. are measured in units of pixels, not in tiles. This guarantees the granularity. Hence feature localization is not merely a low-resolution variation of image segmentation.

The procedure for generating the locales basically uses merging. First, simple statistics (M, C, E) are gathered within each tile. Afterwards, a method similar to "pyramid-linking" [26] is used to merge the tiles into locales. In terms of the parent–child relation, the overlapped pyramid is used.

Working bottom-up, all tiles having feature x are linked to their parent and merged into L_x if the merged locale will have E(L_x) < τ, where τ is a threshold normalized against M(L_x). Otherwise, they will be linked to two different parents belonging to different envelopes L_x^i and L_x^j. During the merge, M(L_x), C(L_x), and E(L_x) are updated accordingly.

From the above definition, it is important to note that in most cases the following are true:

1. (∃x) L_x is not connected,
2. (∃x)(∃y) L_x ∩ L_y ≠ ∅, x ≠ y,
3. ∪_x L_x ≠ I, the entire image.

Namely, (1) pixels inside a locale for some feature are not necessarily connected, (2) locales are not always disjoint; their envelopes can overlap, and (3) not all pixels in an image must be assigned to some locale in the feature localization process.

A locale is not simply a variant of a nonconnected region; the main difference between a locale and a nonconnected region is illustrated by property (2) above. In the proposed feature localization, it is the approximate location that is identified, not the precise membership of which pixel belongs to which region. The difference is not a philosophical one. If indeed only some simple process is to be applied, e.g., template matching, then the precise membership of the region is important. In the domain of content-based image retrieval, where a very large amount of image and video data is processed, such simple and precise matches are not feasible. Instead, a more heuristic (evidential) process is going to be adopted, which usually involves multiple features and their spatial relationships. For this purpose, it should be evident that the "blobby" locales are easier to extract and are more appropriate than regions formed by (connected) pixels.

Property (3) indicates that, unlike image segmentation, feature localization is incomplete. Using color localization as an example, we will likely not be interested in all colors or all color spots in an image, at least not some tiny noise spots. When only the locales of the few prominent colors are identified, the union of them is not the whole image.

3.3. Specifics for Extracting Locales

The tasks involved in extracting locales are (1) image tile generation and (2) envelope growing. The following is a detailed description of the tasks for identifying color locales.

3.3.1. Image tile generation. The tile size was chosen to be 16 × 16 because it is large enough to generate meaningful statistics for the underlying features, but also small enough to guarantee the granularity.⁵ The choice of the tile size will, of course, depend on the image resolution. If a lower resolution image is chosen, the tile size can readily be reduced to, e.g., 8 × 8.

Color pixels in a tile are classified as of dominant color or transitional color. The transitional color often occurs between two color regions, simply because of the smooth color transition in the sensory data. It could also be due to the anti-aliasing effect or noise.

The dominant color is identified by comparing the intensity value of the current pixel to those of its eight immediate neighbors. The criterion is that it is not on a slope of rising/declining intensity values (with a chosen threshold of 10).
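The dominant/transitional split might be sketched as below. The threshold of 10 follows the text, but the precise slope test is our interpretation (here: a pixel is transitional if any 8-neighbor differs in intensity by more than the threshold), and the function name is ours:

```python
import numpy as np

def classify_dominant(gray, thresh=10):
    """Mark pixels of dominant color: not on a slope of rising/declining
    intensity relative to the eight immediate neighbors.

    `gray` is a 2D intensity image. Neighbors are taken with wrap-around
    (np.roll) for brevity; a real implementation would handle borders.
    """
    g = gray.astype(np.int32)
    dominant = np.ones(g.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            # On a steep intensity slope -> transitional, not dominant
            dominant &= np.abs(g - shifted) <= thresh
    return dominant
```

Pixels failing the test fall into the transitional class and are later attached to the nearest dominant color in their 5 × 5 neighborhood, as described below.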

While identifying dominant pixels, their geometric data are also gathered in the same pass. At this initial stage, each tile is considered as a locale:

• M(L_f) = count of the pixels having feature f.
• C(L_f) = Σ_{i=1}^{M(L_f)} P_i / M(L_f), where P_i is the point coordinate.
• E(L_f) = Σ_{i=1}^{M(L_f)} ((P_x − C_x(L_f))² + (P_y − C_y(L_f))²) / M(L_f) = Σ_{i=1}^{M(L_f)} (P_x² + P_y²) / M(L_f) − C_x(L_f)² − C_y(L_f)², where C_x, C_y, P_x, P_y are the x and y coordinates for C and P, respectively.

As shown above, the only intermediate data generated is Σ_{i=1}^{M(L_f)} (P_x² + P_y²), which can be calculated efficiently in a progressive manner.
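The single-pass statistics above can be sketched as follows; only the count, the coordinate sums, and Σ(P_x² + P_y²) are accumulated, so locales can later be merged by adding these sums:

```python
import numpy as np

def tile_stats(points):
    """Mass M, centroid C, and eccentricity E of the feature pixels
    in a tile, from the running sums described in the text.

    E uses the identity
      E = sum((Px-Cx)^2 + (Py-Cy)^2)/M = sum(Px^2 + Py^2)/M - Cx^2 - Cy^2,
    so no second pass over the pixels is needed.
    """
    pts = np.asarray(points, dtype=np.float64)
    m = len(pts)                         # M(L_f)
    s = pts.sum(axis=0)                  # running sum of coordinates
    sq = (pts ** 2).sum()                # running sum of Px^2 + Py^2
    c = s / m                            # centroid C(L_f)
    e = sq / m - c[0] ** 2 - c[1] ** 2   # eccentricity E(L_f)
    return m, c, e
```

Because M, the coordinate sums, and the squared sums are all additive, two locales can be merged by summing their running totals and recomputing C and E, which is exactly what the pyramidal linking procedure of Section 3.3.2 exploits.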

Dominant colors and associated geometric statistics are added to a tile color list. Dominant pixel geometry data are added to the first element with color similar to it in the color list, and a weighted color average is performed on the list element to obtain a better color definition. If no element with similar color exists, then a new element is created in the list for that pixel, containing only its color and geometry information. Similar colors are those contained in the same volume set of a 32 × 32 × 32 box in the 256 × 256 × 256 RGB space.

After all the dominant colors have been added to the color list, the list is sorted by M(L_f) in descending order, so that transitional colors have a chance to match first to the most frequent dominant color.

Next, the pixels with transitional colors are added to the tile color list. We compare every transitional pixel i against its neighborhood of 5 × 5 pixels. If any of the neighbors have dominant colors, then the neighbor pixel chosen is the one whose color has minimum Euclidean distance in RGB space from pixel i. The geometry statistics of pixel i are added to the color list element with the closest color to the chosen pixel, but the color information of pixel i is ignored, rather than being averaged with the list element color. If none of the neighbors of pixel i has a dominant color, then pixel i will be added to a transitional color list.

Finally, both color lists are checked for elements with color similar to other elements in the same list, which is possible because, after performing the color averaging, a color can gradually change to become similar to other colors in the list. All similar elements are merged together. However, there cannot be any similar elements between the two lists, so to merge the two color lists we only need to append the transitional colors list to the end of the dominant colors list.

⁵ The size 16 × 16 happens to be the size of the macroblocks for the MPEG motion vectors. Although not addressed in this paper, this has proven convenient in our current work when motion parameters are brought into consideration for retrieving MPEG videos.


3.3.2. Envelope growing. Generating locales (or final envelopes) requires the use of a dynamic pyramid linking procedure. A 4 × 4 overlapped pyramid structure [26] is used, and parent nodes compete for links to child nodes in a fair competition.

The tile list for the image is considered as an enumeration of the pyramid child nodes, each containing a color list with associated geometry and envelope information. To obtain a fully linked pyramid and, therefore, a final color list for the single top-most parent node—which is a locales list—we apply the linking procedure iteratively until we reach the level with only one node.

PROCEDURE. Envelope Growing by Pyramidal Linking.

begin
  Initial Linking Step:
  /* Use 2×2 nonoverlapped pyramid in which each child has only one parent */
  For each child node
    For each e ∈ {color list elements of the child node}
      For each similar color element pe of the parent node
        C = the eccentricity of merged e and pe
        If C < τ (a pyramid-level-dependent threshold)
          Merge the color and geometry statistics of e into pe;
          Make a parent–child link between e and pe;
          /* One link only for each e at this initial stage. */
          Break (from the last For loop);
      If e is not linked
        Create a new node pe in the parent's color list;
        Make a link between the child and the parent.

  Link Updating Step:
  /* Use 4×4 overlapped pyramid in which each child has four parents */
  Repeat until child–parent links do not change anymore
    For each child node
      For each e ∈ {color list elements of the child node}
        Find all similar color elements pe's from the 4 parent nodes;
        If merging with one of the pe's yields a more compact locale
            than the currently linked pe
          Switch e's parent to the new pe and update the statistics.

  Finalization Step:
  Merge similar colors and remove empty entries from the parent list.

  Go up one more level in the pyramid and repeat the above.
end

During each iteration in the link updating step, the parent list nodes must remain constant, so that the linking procedure is consistent in each iteration. Updating the geometry statistics is therefore done on an additional tentative parent list, and after the iteration the parent list is updated. Obviously, merging criteria are needed so that the linking procedure will produce good spatial envelopes and terminate. We observe that if there is a closer choice in terms of actual pixel distance between two color list elements' centroids, it is visually and conceptually more likely to be a part of the same feature of the image. Thus, the criterion is compactness, and the competition is for the closest parent with similar features to a child. That also guarantees termination of the linking procedure, since the overall distance of all the nodes keeps getting smaller in at least pixel-size steps. It is possible that a closer parent would actually create a more dispersed envelope than features are likely to exhibit, so we require that the normalized eccentricity be less than a threshold. The eccentricity is normalized by the shape and mass (M) of the color element. We analyze two extreme cases of acceptable shapes and mass in order to obtain an estimate of the magnitude of the normalized eccentricity. If the shape is a circular disk of radius r_0, then

\[
E = \frac{1}{M} \sum_{r=0}^{r_0} 2\pi r \times r^2 = \frac{1}{2}(r_0 + 1)^2 = \frac{M}{2\pi} + \sqrt{\frac{M}{\pi}} + \frac{1}{2}, \tag{3}
\]

where the mass M = \pi r_0^2.

The normalized eccentricity is therefore approximately (1/M)E. If the shape is a bar-like region of width 2x_0 + 1 and height 1, then

\[
E = \frac{1}{2x_0 + 1} \sum_{x=0}^{x_0} 2 \times x^2 = \frac{1}{3}(x_0 + 1)x_0 = \frac{M^2 - 1}{12}, \tag{4}
\]

where the mass M = 2x_0 + 1.

The normalized eccentricity is therefore approximately (1/M²)E. As derived, the circular disk's normalized eccentricity is

\[
\hat{E}(M) = \frac{1}{M} E = \frac{1}{2\pi} + \frac{1}{\sqrt{\pi M}} + \frac{1}{2M}. \tag{5}
\]

We argue that the larger a locale is, the less sensitive it is to noise or holes; therefore, it should approach the most compact eccentricity possible, which is that of the circular disk. So the above equation specifies a threshold that is valid for all locale shapes with significant mass. Since small locales are more sensitive to noise and shape, the above equation will not apply to them, so we would like to assign them a higher threshold to allow more leniency in the merging. Using Ê(M) as a basis function, we multiply it by another function G(M) of the form G(M) = 1 + C1·e^(−μ1·M). This exponential function has the property that it approaches 1 when M is very large, so the product approaches 1/(2π). However, when M is small, the function exponentially increases the threshold required for a very compact region, based on the locale size. After multiplying, the function that we get is asymptotically, and characteristically for small M, equivalent to 1/(2π) + C2·e^(−μ2·M) = 1/(2π) + 1.76·e^(−0.00025·M). The parameters (C, μ) were estimated based on empirical data.
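As a sanity check on the disk case, the normalized eccentricity of a rasterized disk can be computed directly; for a locale of reasonably large mass it should approach 1/(2π) ≈ 0.159, as Eq. (5) predicts. The following small sketch is illustrative only and is not taken from the paper:

```python
import math

def disk_normalized_eccentricity(r0):
    # Rasterize a disk of radius r0 centered at the origin, take E as the
    # per-pixel mean squared distance to the centroid, and normalize by the
    # mass M, as in Eq. (5).
    pts = [(x, y) for x in range(-r0, r0 + 1)
                  for y in range(-r0, r0 + 1) if x * x + y * y <= r0 * r0]
    M = len(pts)
    E = sum(x * x + y * y for x, y in pts) / M  # centroid is the origin
    return E / M

print(disk_normalized_eccentricity(40), 1 / (2 * math.pi))
```

For r0 = 40 the value is very close to 1/(2π), confirming that the circular disk sets the compactness floor that the threshold function is built around.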

The merging procedure is simply an adjustment of the envelope and a geometric correction, which can be done because the intermediate statistics are retrievable from the final statistics.

Locale extraction for all images in the database is not made at run time but before any search query is submitted. Locales are essentially used for the search by object model described in the next section. Locales for a given object model are extracted at run time, when the object is presented by the user.

4. SEARCH BY OBJECT MODEL

This section describes our method for search by object model. A recognition kernel [27] is defined as a multiresolution model for each object. Features of an object are extracted at levels that are most appropriate to yield only the necessary yet sufficient details. Together they form the kernel. Besides its multiresolution nature, which will not be emphasized in this paper, the recognition kernel encompasses intrinsic features such as color, texture, and shape, which are vital for the retrieval of objects. The following two subsections describe our approaches to modeling and matching, respectively, and Section 4.3 provides further implementation details.

4.1. Color, Texture, and Shape in Object Modeling

1. Color. Colors in a model image are sorted according to their frequency (number of pixels) in the RGB color histogram. The first few (e.g., five) are called the most frequent colors (MFCs). When color is extracted from relatively low resolution images, where only very few prominent colors are preserved, the MFCs become especially dominant.

Locale(s) for each MFC in the object model are extracted first. Each pair of centroids of two MFC locales can be connected to produce an MFC vector. The lengths of the MFC vectors and the angles between them characterize the color distribution, size, and orientation of the object. To reduce the total number of MFC vectors, only the vectors that connect the first MFC centroid to the other MFC centroids are used. Hence, for k (k ≥ 2) MFCs, the total number of MFC vectors is k − 1.
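For illustration, the MFC vectors can be computed from the locale centroids as follows. This is a sketch only; the centroid coordinates are assumed to be already extracted from the locales:

```python
import math

def mfc_vectors(centroids):
    # centroids: centroids of the k MFC locales, most frequent color first.
    # Returns the k - 1 vectors from the first centroid to each of the
    # others, as (length, angle-in-radians) pairs.
    x0, y0 = centroids[0]
    return [(math.hypot(x - x0, y - y0), math.atan2(y - y0, x - x0))
            for x, y in centroids[1:]]

vecs = mfc_vectors([(0.0, 0.0), (3.0, 4.0), (0.0, 5.0)])
print(vecs)   # two vectors, both of length 5.0
```

The vector lengths scale with object size and the angles shift uniformly under rotation, which is exactly what the matching step in Section 4.2 exploits.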

For simplicity, the RGB color model is adopted. It suffices for the purpose of content-based retrieval in C-BIRD. Alternatively, some luminance-chrominance color models (e.g., YUV, LUV) could be used, which would reduce the representational redundancy present in the RGB model.

2. Texture. Just as the color histogram can be defined in a 3D space (RGB, LUV, etc.), a texture histogram can be defined in a 3D or 2D space. Two of the Tamura texture measures are coarseness and directionality [28]. Recent studies [29, 30] also suggest that they are among the few most effective perceptual dimensions in discriminating texture patterns.

In our edge-based approach, the directionality is simply measured by the gradient direction φ of the edge pixels. It is especially useful in handling rotations. When an object is rotated on a 2D plane (e.g., a book is placed on the desk with a different orientation), all the edge orientations are simply incremented (or decremented) by some angle θ. The coarseness can be characterized by edge separation, which is measured as the distance to the nearest edge pixel along the direction of φ. Apparently, the edge separation is sensitive to the scale/resolution of the images.

The current C-BIRD implementation uses a 2D texture space composed of S (edge separation) and φ (directionality). The texture statistics are extracted for each locale; in other words, they are locale-based. They are derived from the edge image of the luminance image Y, where Y = 0.299R + 0.587G + 0.114B.

3. Shape. The generalized Hough transform (GHT) [31] is adopted to represent the shape of the object. Briefly, each edge point in the object model is represented by a vector r_i = (ψ_i, r_i) connecting the edge point to a chosen reference point for the object. All r_i's are stored in an R-table which serves as the object model. The R-table is indexed by the edge orientation φ_i of the edge point. At matching time, each edge point in the database image uses the R-table to cast its votes in an accumulator array. As a result, a peak will be formed at the location of the corresponding reference point in the accumulator array if the object indeed appears in the database image.
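A compact sketch of R-table construction and voting follows, assuming edge points are given as (x, y, quantized gradient orientation) triples. This is a toy version of the GHT for a fixed scale and rotation, not the paper's implementation:

```python
from collections import defaultdict

def build_r_table(model_edges, ref):
    # For each quantized edge orientation phi, store the displacement vectors
    # r from the edge point to the chosen reference point.
    table = defaultdict(list)
    for x, y, phi in model_edges:
        table[phi].append((ref[0] - x, ref[1] - y))
    return table

def ght_vote(image_edges, table, width, height):
    # Each edge pixel in the image casts votes for candidate reference-point
    # locations; a peak marks a detected instance of the model.
    acc = [[0] * width for _ in range(height)]
    for x, y, phi in image_edges:
        for dx, dy in table.get(phi, ()):
            rx, ry = x + dx, y + dy
            if 0 <= rx < width and 0 <= ry < height:
                acc[ry][rx] += 1
    return acc

model = [(0, 0, 0), (2, 0, 0), (0, 2, 90), (2, 2, 90)]   # corners of a square
table = build_r_table(model, ref=(1, 1))
shifted = [(x + 3, y + 3, phi) for x, y, phi in model]   # same shape, moved
acc = ght_vote(shifted, table, 10, 10)
print(acc[4][4])   # 4 votes: the shifted reference point is at (4, 4)
```

All four edge points of the translated shape agree on the same reference location, which is why the peak rises above the scattered stray votes.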

The major advantage of the GHT (and its variants) over other shape representations [32] is its insensitivity to noise and occlusion [33, 34]. It can also be applied hierarchically to describe the object (or a portion of the object) at multiple resolutions. It is known that the discriminative power of the GHT diminishes when the aim is to recognize the object at all possible scales and orientations, because the GHT matching would then have to be attempted at numerous iterations and the decision often becomes unattainable. We therefore propose the following three-step matching algorithm, in which the GHT is only applied after the first two steps, once a certain hypothesis of a possible object size, orientation, and location has been made.

4.2. The Three-step Matching Algorithm

A three-step matching algorithm for searching by object models in image and video databases is developed: (1) color hypothesis, (2) texture support, (3) shape verification. It is generally accepted that color is fairly invariant to scaling and rotation; hence, color is used in the first step of the matching. After color localization, a hypothesis of the existence of an object at a certain location, size, and orientation can be made. If there is sufficient similarity in texture between the object model and the image at the vicinity of the hypothesized enclosure, then a shape verification procedure based on the GHT is invoked.

For both color and shape there is an issue of similarity. It is dealt with effectively using the MFC vectors and the geometric and texture parameters of the locales. First, if the model object appears in the image with exactly the same size and orientation, then the mass M and eccentricity E of each locale, the length ρ_i and orientation α_i of each MFC vector, and the angles β_j between the pairs of MFC vectors are all identical, whether they are extracted from the model or from the object in the image. Second, if the object in the image has a different size and/or orientation, then M and ρ_i should be scaled according to the size ratio, α_i should be incremented by a rotational angle θ, whereas β_j would remain the same. Certain tolerance for error is implemented to support the similarity. In summary, we have the following matching algorithm.

MATCHING ALGORITHM.

begin
  /* Image tile generation */
  Within each 16×16 tile of an image
    Gather M, C, E for each MFC associated with the object model;

  /* Color localization */
  Use overlapped pyramid linking to group tiles into locales L for each MFC;

  /* Color hypothesis */
  If (#-of-similar-color-locales ≥ 2) and their MFC vectors
      are 'similar' to the MFC vectors in the model
    Make hypothesis of size, orientation, and bounding box of a matching object;

  /* Texture support */
  For all locales of the hypothesized matching object
    If texture measures are consistent with the hypothesized size and orientation
      Proceed to check the shape using the GHT;

  /* Shape verification */
  Within (and at the vicinity of) the hypothesized bounding box
    All edge pixels use the R-table of the (rotated/scaled) object model to vote;
    If #-of-votes near the reference point exceeds a chosen threshold
      Confirm the detection of the object;
end


4.3. Specifics for Search by Object Model

4.3.1. Locale-based texture measure. Texture statistics are gathered under each envelope/locale. We use edge detection and separation algorithms on the full-resolution image. No downsampling is applied at this step because texture is very resolution-dependent. To generate an edge map we use the Sobel edge operators and a nonmaxima edge suppression. A global edge-detection threshold applied to the entire image often yields poor results. In particular, the amount of edges for the object model could be severely affected by the possibly varying background, which would directly affect the texture measures. In C-BIRD, edge detection is conducted within individual locales (and their immediately surrounding areas, to detect the possible boundary edge pixels of the locales). We threshold the edge map using the median of the intensity values of the edge pixels inside each envelope. Applying a local threshold for each locale improves the consistency of the edge detection and, hence, the quality of the texture measure.

To generate the S (edge separation) information, for every edge pixel in an envelope we measure the pixel distance from it, along its gradient line, to the closest edge pixel inside the same envelope that has a similar gradient angle (within a threshold of 15°). The distances on both sides of the pixel (along φ and φ + 180°) are taken into account. If there is no other edge pixel along the gradient line, then the separation distance is "infinity." Also, if the separation is larger than a specified maximum (192 pixels for 640×480 images), then S is considered to be "infinity."

A histogram of S (edge separation) versus φ (edge gradient angle/directionality) is created as the texture measure of each locale. The texture histograms are normalized by simply using percentage values. Initially, the size of the histograms is 193×180, where S = 193 refers to infinity. For efficiency, the histograms are later greatly subsampled. To reduce the impact of noise and the inevitable deviations between the model and the object in the database, the texture histograms are also smoothed by applying a Gaussian operator. The resulting histogram is quantized at 9×8, where the S axis is divided into eight cells plus a ninth cell for infinity, and the φ axis is divided into eight cells.
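The quantized locale texture histogram can be sketched as below. The input is assumed to be a list of (S, φ) pairs, one per edge pixel, with S = None standing for infinity; the Gaussian smoothing step is omitted from this sketch:

```python
def texture_histogram(edge_pixels, max_sep=192):
    # Quantize (separation S, gradient angle phi) pairs into a 9x8 histogram:
    # eight separation cells plus a ninth "infinity" cell, and eight angle
    # cells over [0, 180).  Entries are percentages of the edge pixels.
    hist = [[0.0] * 8 for _ in range(9)]
    for S, phi in edge_pixels:
        s_cell = 8 if S is None or S > max_sep else min(int(S * 8 / max_sep), 7)
        a_cell = min(int((phi % 180.0) * 8 / 180.0), 7)
        hist[s_cell][a_cell] += 1
    n = len(edge_pixels)
    return [[100.0 * v / n for v in row] for row in hist]

h = texture_histogram([(10, 0.0), (None, 90.0)])
print(h[0][0], h[8][4])   # 50.0 50.0
```

Percentage normalization makes the histogram independent of locale size, so that locales of different masses can still be compared by intersection.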

4.3.2. Estimation of scale and rotation and execution of the GHT. The database images are screened by considering only the images that have all the locale colors of the model image. Let M(L^m_{MFC_1}), M(L^m_{MFC_i}), M(L^{db}_{MFC_1}), and M(L^{db}_{MFC_i}) denote the masses of the first and the ith MFC locales from the model image and the database image, respectively, and let MFV^m_{i-1} and MFV^{db}_{i-1} denote the (i − 1)th vectors that connect the centroids of these MFC locales. For each pair of MFC vectors from the model image and the database image, the following are checked to determine whether a hypothesis of the existence of an object with a certain scale and orientation is warranted: if

\[
M\bigl(L^{db}_{MFC_1}\bigr) \big/ M\bigl(L^{m}_{MFC_1}\bigr) = k,
\]

then

\[
M\bigl(L^{db}_{MFC_i}\bigr) \big/ M\bigl(L^{m}_{MFC_i}\bigr) \approx k; \qquad
\bigl\| MFV^{db}_{i-1} \bigr\| \big/ \bigl\| MFV^{m}_{i-1} \bigr\| \approx k;
\]

and if

\[
\alpha\bigl(MFV^{db}_{1}\bigr) - \alpha\bigl(MFV^{m}_{1}\bigr) = \theta,
\]

then

\[
\alpha\bigl(MFV^{db}_{i-1}\bigr) - \alpha\bigl(MFV^{m}_{i-1}\bigr) \approx \theta.
\]

The "≈" symbol allows error tolerance in the above measures. A simple weighted average error is used to determine whether it passes the threshold for the "color hypothesis" step.
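The consistency checks above can be sketched as follows. This is a simplification: the mass lists and MFC vectors are assumed precomputed, and the tolerance handling is a plain relative threshold rather than the paper's weighted average error:

```python
def color_hypothesis(model, db, tol=0.2, ang_tol=0.2):
    # model/db: {'mass': [locale masses, first MFC first],
    #            'vectors': [(length, angle) MFC vectors]}.
    # Returns the hypothesized (scale k, rotation theta) if all measures
    # agree within tolerance, else None.
    k = db['mass'][0] / model['mass'][0]
    theta = db['vectors'][0][1] - model['vectors'][0][1]
    for mm, dm in zip(model['mass'][1:], db['mass'][1:]):
        if abs(dm / mm - k) > tol * k:          # mass ratios ~ k
            return None
    for (ml, ma), (dl, da) in zip(model['vectors'], db['vectors']):
        if abs(dl / ml - k) > tol * k:          # vector length ratios ~ k
            return None
        if abs((da - ma) - theta) > ang_tol:    # rotation offsets ~ theta
            return None
    return k, theta

model = {'mass': [100.0, 50.0], 'vectors': [(10.0, 0.0)]}
db    = {'mass': [200.0, 100.0], 'vectors': [(20.0, 0.5)]}
print(color_hypothesis(model, db))   # (2.0, 0.5): doubled size, rotated 0.5 rad
```

A returned (k, θ) pair is exactly the hypothesis that the subsequent texture and GHT steps then attempt to support or reject.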

If successful, then the weighted average scale factor k̄ and rotation θ̄ are employed as the hypothesized scale and rotation factors.

The texture histograms for each pair of matching locales from the model image and the database image are compared after adjustment according to k̄ and θ̄. Namely, in the database texture histogram the angle value is incremented by θ̄, and the separation value is multiplied by k̄.

As in the color histogram matching (Eq. (2)), the texture histograms H^m_i and H^{db}_i(k̄, θ̄) are matched by histogram intersection, i.e., by taking the sum of the minimum of the texture histograms,

\[
\nu \equiv \sum \min\bigl(H^{m}_{i}, H^{db}_{i}(\bar{k}, \bar{\theta})\bigr). \tag{6}
\]

If ν > threshold τ₀, then the "color hypothesis" has "texture support."
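Histogram intersection as in Eq. (6) is straightforward to compute over the percentage-normalized 9×8 texture histograms; with identical histograms the score reaches 100. A minimal sketch:

```python
def histogram_intersection(h_model, h_db):
    # nu = sum over all cells of min(H_m, H_db); for percentage-normalized
    # histograms the score lies in [0, 100].
    return sum(min(a, b)
               for row_m, row_d in zip(h_model, h_db)
               for a, b in zip(row_m, row_d))

h1 = [[60.0, 0.0], [40.0, 0.0]]
h2 = [[30.0, 30.0], [20.0, 20.0]]
print(histogram_intersection(h1, h2))   # 50.0
```

Intersection only rewards mass that both histograms place in the same cell, so texture energy at the wrong separation or angle contributes nothing to ν.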

The implementation of the generalized Hough transform (GHT) is fairly straightforward [31], except that

1. the GHT is only performed on the portion of the database image containing all of the matched locales, to save time;
2. the voting for the GHT is only for the single scale k̄ and rotation θ̄.

After the GHT voting, the accumulator array is smoothed (i.e., every 5×5 neighborhood is averaged) to aid the peak location. The maximum value in the accumulator array is then located. If it exceeds the threshold (50% of the total edges in the R-table for the object, adjusted by k̄), then its location indicates the location of the matched object in the database image.

5. EXPERIMENTAL RESULTS

This section demonstrates some of our preliminary results.

5.1. General Descriptions on System Design

The C-BIRD system has been implemented on both Unix and PC platforms. On both platforms we use the same search engine and preprocessor, written in C++. The user interface is implemented in Perl and HTML as a Web application usable from any Web browser, as well as a Java applet. Figure 2 shows the general architecture of the C-BIRD implementation. The system is accessible from http://jupiter.cs.sfu.ca/cbird/cbird.cgi and http://jupiter.cs.sfu.ca/cbird/java/ (IE 4.0 or Netscape 4.0). The C-BIRD system rests on four major components:

• extraction of images from the WWW (Image Excavator);
• processing of images to extract image features and storing precomputed data in a database (Pre-Processor);
• querying (User Interface);
• matching a query with image features in the database (Search Engine).

The Image Excavator extracts images from an image repository. This repository can be the WWW space, in which case the process crawls the Web searching for images, or a set of still images on disk or CD-ROM. Frames can also be extracted from video streams using cut-detection algorithms [35, 18, 6] and processed as still images. Once images are extracted from the repository, they are given as input to the image analyzer (C-BIRD preprocessor) that extracts visual content features like color and edge characteristics.

FIG. 2. (a) C-BIRD general architecture. (b) Excavator: the Web-crawling process for image extraction.

These visual features, along with context features like image URL, parent URL, keywords, etc., extracted with the Image Excavator, are stored in a database. The collection of images and the extraction of image features are processes that are done off-line before queries are submitted. When a query is submitted, accessing the original data in the image repository is not necessary. Only the precomputed data stored in the database is used for image feature matching. This makes C-BIRD more scalable and allows fast query responses for a large number of users and a huge set of images.

We have implemented several types of searches and any combinations of them in C-BIRD:

1. search by conjunctions and disjunctions of keywords;

2. search by color histogram: similarity with the color histogram of a sample image. Color can also be specified by percentage within the image or by layout in a 1×1, 2×2, 4×4, or 8×8 grid;

3. search by texture: texture here is characterized by edge density and orientation; its layout can also be specified;

4. search by illumination invariance: similarity of color chromaticity using color-channel-normalized images;

5. search by object model: specification of an object to look for in images.

The left of Fig. 3a shows the user interface, using Netscape, for browsing the image repository or the image set resulting from a query. While browsing, users can submit a query by image similarity. The right of Fig. 3a shows the user interface for specifying a color layout for a given query. Figure 3b shows an example of the output (images and their associated keywords) from the Image Excavator after parsing a web page.

The Image Excavator is a web crawler that we built to follow links from web page to web page in search of images. The text present in each web page is parsed, and some representative keywords are retained and associated with the images found in the web page. The keyword extraction process uses a semantic network of English words and builds concept hierarchies with the selected words. The process of keyword extraction is explained in [36, 37].

The database used by C-BIRD contains mainly meta-data extracted by the preprocessor and the Image Excavator. As explained above, only the features collected in this database at preprocessing time are used by the search engine for image or image-feature matching. During run time, minimal processing is done. For each image collected, the database contains some description information, a feature descriptor, and a layout descriptor, as well as a set of multiresolution subimage (i.e., search window) feature descriptors. Neither the original image nor the subimages are directly stored in the database, but only their feature descriptors.

We use our illumination-invariant method to detect cuts in videos and to segment a video into clips (frame sequences). The starting time and duration of each image sequence are stored with the meta-data. While the thumbnail is generated from the middle frame of the clip, color and texture features are extracted from all frames.

The current test database has over 1300 images. The meta-data is stored in an SQL server running on a Pentium-II 333-MHz PC with 128 MB of RAM. Search times are on the order of 0.1 to 2 s, depending upon the type of search, except for the search by object, which may take up to 10 s.

Figure 4 demonstrates the use of conjunctions of different searches, content-based and description-based. Figure 4a shows the top 20 matches of a query based on a color layout where the top cells were blue (i.e., for blue sky). Figure 4b is the result of a combined content-based and description-based query, with "blue sky" specified as for Fig. 4a and an additional keyword "airplane." Figure 4c is the result of the query "blue sky and green grassland," specified with a color layout grid with the top cells blue, the bottom cells green, and a medium edge density.

5.2. Search by Illumination Invariance

The experimental results for search by illumination invariance are very promising. Figure 5 provides a comparison between the ordinary search by color histogram and our search by illumination invariance. The image sample selected for both searches is the first T-shirt image, which was taken under a dim bluish light. The entire database is searched, and the first 15 matches (sometimes mismatches) are shown in descending order of their matching

FIG. 3. (a) C-BIRD Web user interface. (b) Output from the Image Excavator.

FIG. 4. Conjunction of searches.

scores. As expected, the by-color-histogram method (Fig. 5a) is only capable of turning out many dark images, of which the third image happens to be a correct match (the same T-shirt, folded slightly differently and rotated). However, its matching score ranks behind a book. The result of search by illumination invariance, shown in Fig. 5b, is far better. All three occurrences of the sample T-shirt, the third one under a reddish light, are found. Notably, it also finds many T-shirts under various illuminations. Since the sample T-shirt has basically two colors (dark stripes on white cloth), the matches are mostly correct in terms of their chromaticities, albeit unexpected.

Figure 6 depicts some selected frames from a clip of a "goldfish" scene in a 3-min video that contains 22 cuts/clips. Because the camera was chasing the fish, the reflectance changes significantly. Even with the threshold selected very carefully, the color histogram method still missed one cut and mistakenly added three more cuts (one of them at the third frame of the goldfish clip shown). As shown in [18], the color histogram is simply not able to satisfy both precision and recall in video segmentation. Our illumination-invariant method, however, detects all cuts correctly, using a fixed threshold which works for other test videos as well.

5.3. Search by Object Model

5.3.1. Locale construction. The locale construction algorithm is implemented in the C-BIRD system. Its threshold parameters are chosen to be optimized for the average object size, although they also work well for objects up to five times that magnitude scale. The


FIG. 5. Comparison of two results: (a) result of search by color histogram; (b) result of search by illumination invariance.

FIG. 11. Result of the three-step matching.

最小二乘法及其应用..

最小二乘法及其应用 1. 引言 最小二乘法在19世纪初发明后,很快得到欧洲一些国家的天文学家和测地学家的广泛关注。据不完全统计,自1805年至1864年的60年间,有关最小二乘法的研究论文达256篇,一些百科全书包括1837年出版的大不列颠百科全书第7版,亦收入有关方法的介绍。同时,误差的分布是“正态”的,也立刻得到天文学家的关注及大量经验的支持。如贝塞尔( F. W. Bessel, 1784—1846)对几百颗星球作了三组观测,并比较了按照正态规律在给定范围内的理论误差值和实际值,对比表明它们非常接近一致。拉普拉斯在1810年也给出了正态规律的一个新的理论推导并写入其《分析概论》中。正态分布作为一种统计模型,在19世纪极为流行,一些学者甚至把19世纪的数理统计学称为正态分布的统治时代。在其影响下,最小二乘法也脱出测量数据意义之外而发展成为一个包罗极大,应用及其广泛的统计模型。到20世纪正态小样本理论充分发展后,高斯研究成果的影响更加显著。最小二乘法不仅是19世纪最重要的统计方法,而且还可以称为数理统计学之灵魂。相关回归分析、方差分析和线性模型理论等数理统计学的几大分支都以最小二乘法为理论基础。正如美国统计学家斯蒂格勒( S. M. Stigler)所说,“最小二乘法之于数理统计学犹如微积分之于数学”。最小二乘法是参数回归的最基本得方法所以研究最小二乘法原理及其应用对于统计的学习有很重要的意义。 2. 最小二乘法 所谓最小二乘法就是:选择参数10,b b ,使得全部观测的残差平方和最小. 用数学公式表示为: 21022)()(m in i i i i i x b b Y Y Y e --=-=∑∑∑∧ 为了说明这个方法,先解释一下最小二乘原理,以一元线性回归方程为例. i i i x B B Y μ++=10 (一元线性回归方程)

数据库死锁问题总结

数据库死锁问题总结 1、死锁(Deadlock) 所谓死锁:是指两个或两个以上的进程在执行过程中,因争夺资源而造 成的一种互相等待的现象,若无外力作用,它们都将无法推进下去。此时称系 统处于死锁状态或系统产生了死锁,这些永远在互相等待的进程称为死锁进程。由于资源占用是互斥的,当某个进程提出申请资源后,使得有关进程在无外力 协助下,永远分配不到必需的资源而无法继续运行,这就产生了一种特殊现象 死锁。一种情形,此时执行程序中两个或多个线程发生永久堵塞(等待),每 个线程都在等待被其他线程占用并堵塞了的资源。例如,如果线程A锁住了记 录1并等待记录2,而线程B锁住了记录2并等待记录1,这样两个线程就发 生了死锁现象。计算机系统中,如果系统的资源分配策略不当,更常见的可能是 程序员写的程序有错误等,则会导致进程因竞争资源不当而产生死锁的现象。 锁有多种实现方式,比如意向锁,共享-排他锁,锁表,树形协议,时间戳协 议等等。锁还有多种粒度,比如可以在表上加锁,也可以在记录上加锁。(回滚 一个,让另一个进程顺利进行) 产生死锁的原因主要是: (1)系统资源不足。 (2)进程运行推进的顺序不合适。 (3)资源分配不当等。 如果系统资源充足,进程的资源请求都能够得到满足,死锁出现的可能 性就很低,否则就会因争夺有限的资源而陷入死锁。其次,进程运行推进顺序 与速度不同,也可能产生死锁。 产生死锁的四个必要条件: (1)互斥条件:一个资源每次只能被一个进程使用。 (2)请求与保持条件:一个进程因请求资源而阻塞时,对已获得的资源保持不放。 破解:静态分配(分配全部资源) (3)不剥夺条件:进程已获得的资源,在末使用完之前,不能强行剥夺。 破解:可剥夺 (4)循环等待条件:若干进程之间形成一种头尾相接的循环等待资源关系。 破解:有序分配 这四个条件是死锁的必要条件,只要系统发生死锁,这些条件必然成立,而只要上述条件之一不满足,就不会发生死锁。 死锁的预防和解除:

我的java基础题和答案详解

If语句相关训练 1. (标识符命名)下面几个变量中,那些是对的那些是错的错的请说明理由(CDF) A. ILoveJava B. $20 C. learn@java D. E. Hello_World F. 2tigers 答:标识符中不能有@,不能含有点号,开头只能是字母和$ 2. (Java 程序的编译与运行)假设有如下程序: package public class HelloWorld{ public static void main(String args[]){ "Hello World"); } } 问: 1)假设这个代码存在文件中,那这个程序能够编译通过为什么 如果编译不通过,应该如何改进 答:不能,含有public的类文件名必须要和类名一致;应将改写成 2)假设这个.java 文件放在C:\javafile\目录下,CLASSPATH=.,则生成的.class 文件应该放在什么目录下如何运行 答:.class应该存放在C:\javafile\目录下 3. (if 语句)读入一个整数,判断其是奇数还是偶数 public class Test { int n; If(n%2==0){ 是偶数”); }else{ 是奇数”); } } 4. (操作符)有如下代码: int a = 5; int b = (a++) + (--a) +(++a); 问执行完之后,b 的结果是多少 答:16 解析a先把5赋值给b让后再自增1相当于(b=5+(--6)+(++5))

5. (基本类型的运算)一家商场在举行打折促销,所有商品都进行8 折优惠。一 位程序员把这个逻辑写成: short price = ...; (操作符)有如下代码: a = (a>b)a:b; 请问这段代码完成了什么功能。 答:这段代码的作用是取最大值,当a>b成立时,a=a;当a>b不成立时,a=b; 8. (if 语句)读入一个整数,表示一个人的年龄。如果小于6 岁,则输出“儿童”,6 岁到13 岁,输出“少儿”;14 岁到18 岁,输出“青少年”;18 岁到35 岁,输出“青年”;35 岁到50 岁,输出“中年”;50 岁以上输出“中老年”。 答:public class AgeTest { public static void main(String[] args) { int n=12; if(n<6){

matlab常用对象操作

、常用对象操作:除了一般windows窗口的常用功能键外。 1、!dir 可以查看当前工作目录的文件。!dir& 可以在dos状态下查看。 2、who 可以查看当前工作空间变量名, whos 可以查看变量名细节。 3、功能键: 功能键快捷键说明 方向上键Ctrl+P 返回前一行输入 方向下键Ctrl+N 返回下一行输入 方向左键Ctrl+B 光标向后移一个字符 方向右键Ctrl+F 光标向前移一个字符 Ctrl+方向右键 Ctrl+R 光标向右移一个字符 Ctrl+方向左键 Ctrl+L 光标向左移一个字符 home Ctrl+A 光标移到行首 End Ctrl+E 光标移到行尾 Esc Ctrl+U 清除一行 Del Ctrl+D 清除光标所在的字符 Backspace Ctrl+H 删除光标前一个字符 Ctrl+K 删除到行尾 Ctrl+C 中断正在执行的命令 4、clc可以命令窗口显示的内容,但并不清除工作空间。 二、函数及运算 1、运算符: +:加,-:减, *:乘, /:除,\:左除 ^:幂,':复数的共轭转置,():制定运算顺序。 2、常用函数表: sin( ) 正弦(变量为弧度) Cot( ) 余切(变量为弧度) sind( ) 正弦(变量为度数) Cotd( ) 余切(变量为度数) asin( ) 反正弦(返回弧度) acot( ) 反余切(返回弧度) Asind( ) 反正弦(返回度数) acotd( ) 反余切(返回度数) cos( ) 余弦(变量为弧度) exp( ) 指数 cosd( ) 余弦(变量为度数) log( ) 对数 acos( ) 余正弦(返回弧度) log10( ) 以10为底对数 acosd( ) 余正弦(返回度数) sqrt( ) 开方 tan( ) 正切(变量为弧度) realsqrt( ) 返回非负根 tand( ) 正切(变量为度数) abs( ) 取绝对值

1、曲线拟合及其应用综述

曲线拟合及其应用综述 摘要:本文首先分析了曲线拟合方法的背景及在各个领域中的应用,然后详细介绍了曲线拟合方法的基本原理及实现方法,并结合一个具体实例,分析了曲线拟合方法在柴油机故障诊断中的应用,最后对全文内容进行了总结,并对曲线拟合方法的发展进行了思考和展望。 关键词:曲线拟合最小二乘法故障模式识别柴油机故障诊断 1背景及应用 在科学技术的许多领域中,常常需要根据实际测试所得到的一系列数据,求出它们的函数关系。理论上讲,可以根据插值原则构造n 次多项式Pn(x),使得Pn(x)在各测试点的数据正好通过实测点。可是, 在一般情况下,我们为了尽量反映实际情况而采集了很多样点,造成了插值多项式Pn(x)的次数很高,这不仅增大了计算量,而且影响了函数的逼近程度;再就是由于插值多项式经过每一实测样点,这样就会保留测量误差,从而影响逼近函数的精度,不易反映实际的函数关系。因此,我们一般根据已知实际测试样点,找出被测试量之间的函数关系,使得找出的近似函数曲线能够充分反映实际测试量之间的关系,这就是曲线拟合。 曲线拟合技术在图像处理、逆向工程、计算机辅助设计以及测试数据的处理显示及故障模式诊断等领域中都得到了广泛的应用。 2 基本原理 2.1 曲线拟合的定义 解决曲线拟合问题常用的方法有很多,总体上可以分为两大类:一类是有理论模型的曲线拟合,也就是由与数据的背景资料规律相适应的解析表达式约束的曲线拟合;另一类是无理论模型的曲线拟合,也就是由几何方法或神经网络的拓扑结构确定数据关系的曲线拟合。 2.2 曲线拟合的方法 解决曲线拟合问题常用的方法有很多,总体上可以分为两大类:一类是有理论模型的曲线拟合,也就是由与数据的背景资料规律相适应的解析表达式约束的曲线拟合;另一类是无理论模型的曲线拟合,也就是由几何方法或神经网络的拓扑结构确定数据关系的曲线拟合。 2.2.1 有理论模型的曲线拟合 有理论模型的曲线拟合适用于处理有一定背景资料、规律性较强的拟合问题。通过实验或者观测得到的数据对(x i,y i)(i=1,2, …,n),可以用与背景资料规律相适应的解析表达式y=f(x,c)来反映x、y之间的依赖关系,y=f(x,c)称为拟合的理论模型,式中c=c0,c1,…c n是待定参数。当c在f中线性出现时,称为线性模型,否则称为非线性模型。有许多衡量拟合优度的标准,最常用的方法是最小二乘法。 2.2.1.1 线性模型的曲线拟合 线性模型中与背景资料相适应的解析表达式为: ε β β+ + =x y 1 (1) 式中,β0,β1未知参数,ε服从N(0,σ2)。 将n个实验点分别带入表达式(1)得到: i i i x yε β β+ + = 1 (2) 式中i=1,2,…n,ε1, ε2,…, εn相互独立并且服从N(0,σ2)。 根据最小二乘原理,拟合得到的参数应使曲线与试验点之间的误差的平方和达到最小,也就是使如下的目标函数达到最小: 2 1 1 ) ( i i n i i x y Jε β β- - - =∑ = (3) 将试验点数据点入之后,求目标函数的最大值问题就变成了求取使目标函数对待求参数的偏导数为零时的参数值问题,即: ) ( 2 1 1 = - - - - = ? ?∑ = i i n i i x y J ε β β β (4)

死锁问题解决方法

Sqlcode -244 死锁问题解决 版本说明 事件日期作者说明 创建09年4月16日Alan 创建文档 一、分析产生死锁的原因 这个问题通常是因为锁表产生的。要么是多个用户同时访问数据库导致该问题,要么是因为某个进程死了以后资源未释放导致的。 如果是前一种情况,可以考虑将数据库表的锁级别改为行锁,来减少撞锁的机会;或在应用程序中,用set lock mode wait 3这样的语句,在撞锁后等待若干秒重试。 如果是后一种情况,可以在数据库端用onstat -g ses/onstat -g sql/onstat -k等命令找出锁表的进程,用onmode -z命令结束进程;如果不行,就需要重新启动数据库来释放资源。 二、方法一 onmode -u 将数据库服务器强行进入单用户模式,来释放被锁的表。注意:生产环境不适合。 三、方法二 1、onstat -k |grep HDR+X 说明:HDR+X为排他锁,HDR 头,X 互斥。返回信息里面的owner项是正持有锁的线程的共享内存地址。 2、onstat -u |grep c60a363c 说明:c60a363c为1中查到的owner内容。sessid是会话标识符编号。 3、onstat -g ses 20287 说明:20287为2中查到的sessid内容。Pid为与此会话的前端关联的进程标识符。 4、onstat -g sql 20287

说明:20287为2中查到的sessid内容。通过上面的命令可以查看执行的sql语句。 5、ps -ef |grep 409918 说明:409918为4中查到的pid内容。由此,我们可以得到锁表的进程。可以根据锁表进程的重要程度采取相应的处理方法。对于重要且该进程可以自动重联数据库的进程,可以用onmode -z sessid的方法杀掉锁表session。否则也可以直接杀掉锁表的进程 kill -9 pid。 四、避免锁表频繁发生的方法 4.1将页锁改为行锁 1、执行下面sql语句可以查询当前库中所有为页锁的表名: select tabname from systables where locklevel='P' and tabid > 99 2、执行下面语句将页锁改为行锁 alter table tabname lock mode(row) 4.2统计更新 UPDATE STATISTICS; 4.3修改数据库配置onconfig OPTCOMPIND参数帮助优化程序为应用选择合适的访问方法。 ?如果OPTCOMPIND等于0,优化程序给予现存索引优先权,即使在表扫描比较快时。 ?如果OPTCOMPIND设置为1,给定查询的隔离级设置为Repeatable Read时,优化程序才使用索引。 ?如果OPTCOMPIND等于2,优化程序选择基于开销选择查询方式。,即使表扫描可以临时锁定整个表。 *建议设置:OPTCOMPIND 0 # To hint the optimizer 五、起停informix数据库 停掉informix数据库 onmode -ky 启动informix数据库 oninit 注意千万别加-i参数,这样会初始化表空间,造成数据完全丢失且无法挽回。

精选30道Java笔试题解答

都是一些非常非常基础的题,是我最近参加各大IT公司笔试后靠记忆记下来的,经过整理献给与我一样参加各大IT校园招聘的同学们,纯考Java基础功底,老手们就不用进来了,免得笑话我们这些未出校门的孩纸们,但是IT公司就喜欢考这些基础的东西,所以为了能进大公司就~~~当复习期末考吧。花了不少时间整理,在整理过程中也学到了很多东西,请大家认真对待每一题~~~ 下面都是我自己的答案非官方,仅供参考,如果有疑问或错误请一定要提出来,大家一起进步啦~~~ 1. 下面哪些是Thread类的方法() A start() B run() C exit() D getPriority() 答案:ABD 解析:看Java API docs吧:https://www.doczj.com/doc/7712082754.html,/javase/7/docs/api/,exit()是System类的方法,如System.exit(0)。 2. 下面关于https://www.doczj.com/doc/7712082754.html,ng.Exception类的说法正确的是() A 继承自Throwable B Serialable CD 不记得,反正不正确 答案:A 解析:Java异常的基类为https://www.doczj.com/doc/7712082754.html,ng.Throwable,https://www.doczj.com/doc/7712082754.html,ng.Error和https://www.doczj.com/doc/7712082754.html,ng.Exception继承Throwable,RuntimeException和其它的Exception等继承Exception,具体的RuntimeException继承RuntimeException。扩展:错误和异常的区别(Error vs Exception) 1) https://www.doczj.com/doc/7712082754.html,ng.Error: Throwable的子类,用于标记严重错误。合理的应用程序不应该去try/catch这种错误。绝大多数的错误都是非正常的,就根本不该出现的。 https://www.doczj.com/doc/7712082754.html,ng.Exception: Throwable的子类,用于指示一种合理的程序想去catch的条件。即它仅仅是一种程序运行条件,而非严重错误,并且鼓励用户程序去catch它。 2) Error和RuntimeException及其子类都是未检查的异常(unchecked exceptions),而所有其他的Exception 类都是检查了的异常(checked exceptions). checked exceptions: 通常是从一个可以恢复的程序中抛出来的,并且最好能够从这种异常中使用程序恢复。比如FileNotFoundException, ParseException等。 unchecked exceptions: 通常是如果一切正常的话本不该发生的异常,但是的确发生了。比如ArrayIndexOutOfBoundException, ClassCastException等。从语言本身的角度讲,程序不该去catch这类异常,虽然能够从诸如RuntimeException这样的异常中catch并恢复,但是并不鼓励终端程序员这么做,因为完全没要必要。因为这类错误本身就是bug,应该被修复,出现此类错误时程序就应该立即停止执行。因此, 面对Errors和unchecked exceptions应该让程序自动终止执行,程序员不该做诸如try/catch这样的事情,而是应该查明原因,修改代码逻辑。 RuntimeException:RuntimeException体系包括错误的类型转换、数组越界访问和试图访问空指针等等。

最小二乘法原理及应用【文献综述】
