
Content Based Image Retrieval using Color and Texture

Manimala Singha* and K.Hemachandran$

Dept. of Computer Science, Assam University, Silchar

India. Pin code 788011

*n.manimala888@…, $khchandran@…

ABSTRACT

The need for content based image retrieval techniques can be found in a number of different domains such as Data Mining, Education, Medical Imaging, Crime Prevention, Weather Forecasting, Remote Sensing and Management of Earth Resources. This paper presents a content based image retrieval system, using features like texture and color, called WBCHIR (Wavelet Based Color Histogram Image Retrieval). The texture and color features are extracted through wavelet transformation and the color histogram, and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated a promising and faster retrieval method on a WANG image database containing 1000 general-purpose color images. The performance has been evaluated by comparing with the existing systems in the literature.

Keywords

Image Retrieval, Color Histogram, Color Spaces, Quantization, Similarity Matching, Haar Wavelet, Precision and Recall.

1. INTRODUCTION

Research on content-based image retrieval has gained tremendous momentum during the last decade. A lot of research work has been carried out on image retrieval by many researchers, expanding in both depth and breadth [1]-[5]. The term Content Based Image Retrieval (CBIR) seems to have originated with the work of Kato [6] on the automatic retrieval of images from a database, based on the color and shape present. Since then, the term has widely been used to describe the process of retrieving desired images from a large image database on the basis of syntactical image features (color, texture and shape). The techniques, tools and algorithms that are used originate from fields such as statistics, pattern recognition, signal processing, data mining and computer vision. In the past decade, many image retrieval systems have been successfully developed, such as the IBM QBIC System [7], developed at the IBM Almaden Research Center, the VIRAGE System [8], developed by the Virage Incorporation, the Photobook System [9], developed by the MIT Media Lab, the VisualSeek System [10], developed at Columbia University, the WBIIS System [11], developed at Stanford University, the Blobworld System [12], developed at U.C. Berkeley, and the SIMPLIcity System [13]. Since color, texture and shape features alone cannot sufficiently represent image semantics, semantic-based image retrieval is still an open problem. CBIR is the most important and effective image retrieval method and is widely studied in both academia and industry. In this paper we propose an image retrieval system, called Wavelet-Based Color Histogram Image Retrieval (WBCHIR),

based on the combination of color and texture features. The color histogram is used for the color feature and the wavelet representation for the texture and location information of an image. This reduces the processing time for retrieval of an image with more promising representatives. The extraction of color features from digital images depends on an understanding of the theory of color and the representation of color in digital images. Color spaces are an important component for relating color to its representation in digital form. The transformations between different color spaces and the quantization of color information are primary determinants of a given feature extraction method. Color is usually represented by the color histogram, color correlogram, color coherence vector and color moments, under a certain color space [14-17]. The color histogram feature has been used by many researchers for image retrieval [18 and 19]. A color histogram is a vector, where each element represents the number of pixels falling in a bin, of an image [20]. The color histogram has been used as one of the feature extraction attributes, with the advantage of robustness with respect to geometric changes of the objects in the image. However, the color histogram may fail when the texture feature is dominant in an image [21]. Li and Lee [22] have proposed a ring-based fuzzy histogram feature to overcome the limitation of the conventional color histogram. The distance measures used by many researchers for image retrieval include the Histogram Euclidean Distance, Histogram Intersection Distance, Histogram Manhattan Distance and Histogram Quadratic Distance [23-27].

Texture is also considered as one of the feature extraction attributes by many researchers [28-31]. Although there is no formal definition of texture, intuitively this descriptor provides measures of properties such as smoothness, coarseness and regularity. The texture features of an image are mainly analyzed through statistical, structural and spectral methods [32].

The rest of the paper is organized as follows: Section 2 presents a brief review of the related work. Section 3 describes the color feature extraction. Section 4 presents the texture feature extraction and Section 5 the similarity matching. The proposed method is given in Section 6 and Section 7 describes the performance evaluation of the proposed method. Finally, the experimental work and the conclusions are presented in Sections 8 and 9 respectively.

2. RELATED WORK

Lin et al. [14] proposed a color-texture and color-histogram based image retrieval system (CTCHIR). They proposed (1) three image features based on color, texture and color distribution: the color co-occurrence matrix (CCM), the difference between pixels of scan pattern (DBPSP) and the color histogram for K-means (CHKM), and (2) a method for image retrieval that integrates CCM, DBPSP and CHKM to enhance the image detection rate and simplify the computation of image retrieval. From the experimental results they found that their proposed method outperforms the methods of Jhanwar et al. [33] and Huang and Dai [34]. Raghupathi et al.

[35] made a comparative study of image retrieval techniques using different feature extraction methods: color histogram, Gabor transform, color histogram + Gabor transform, contourlet transform and color histogram + contourlet transform. Hiremath and Pujari [36] proposed a CBIR system based on color, texture and shape features by partitioning the image into tiles. The features computed on the tiles serve as local descriptors of color and texture. The color and texture features are analyzed using a two-level grid framework and the shape feature is extracted using Gradient Vector Flow. A comparison of the experimental results of their method with other systems [37]-[40] showed that their retrieval system gives better performance than the others. Rao et al. [41] proposed CTDCIRS (color-texture and dominant color based image retrieval system), integrating three features: the motif co-occurrence matrix (MCM) and the difference between pixels of scan pattern (DBPSP), which describe the texture features, and the dynamic dominant color (DDC), which extracts the color feature. They

compared their results with the work of Jhanwar et al. [33] and Huang and Dai [34] and found that their method gives better retrieval results than the others.

3. COLOR FEATURE

The color feature has widely been used in CBIR systems, because of its easy and fast computation [42]-[43]. Color is also an intuitive feature and plays an important role in image matching. The extraction of color features from digital images depends on an understanding of the theory of color and the representation of color in digital images. The color histogram is one of the most commonly used color feature representations in image retrieval. The original idea of using the histogram for retrieval comes from Swain and Ballard [27], who showed that the power to identify an object using color is much greater than that of gray scale alone.

3.1 COLOR SPACE SELECTION AND COLOR QUANTIZATION

The color of an image can be represented through any of the popular color spaces like RGB, XYZ, YIQ, L*a*b*, U*V*W*, YUV and HSV [44]. It has been reported that the HSV color space gives the best color histogram feature among the different color spaces [45]-[49]. The application of the HSV color space in content-based image retrieval has been reported by Sural et al. [50]. In the HSV color space, color is represented in terms of three components: Hue (H), Saturation (S) and Value (V), and the space is based on cylindrical coordinates [51 and 52].

Color quantization is a process that optimizes the use of distinct colors in an image without affecting its visual properties. For a true color image, the number of distinct colors is up to 2^24 = 16,777,216, and the direct extraction of the color feature from the true color image leads to a large computation. In order to reduce the computation, color quantization can be used to represent the image without a significant reduction in image quality, thereby reducing the storage space and enhancing the processing speed [53]. The effect of color quantization on the performance of image retrieval has been reported by many authors [53 and 54].

3.2 Color Histogram

A color histogram represents the distribution of colors in an image, through a set of bins, where each histogram bin corresponds to a color in the quantized color space. A color histogram for a given image is represented by a vector:

H = [H[0], H[1], H[2], H[3], ..., H[i], ..., H[n]]

where i is a color bin in the color histogram, H[i] represents the number of pixels of color i in the image, and n is the total number of bins used in the color histogram. Typically, each pixel in an image is assigned to a bin of the color histogram, so in the color histogram of an image the value of each bin gives the number of pixels that have the corresponding color. In order to compare images of different sizes, color histograms should be normalized. The normalized color histogram H' is given as:

H' = [H'[0], H'[1], H'[2], ..., H'[i], ..., H'[n]], where H'[i] = H[i] / p

and p is the total number of pixels of the image [55].
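As an illustration, a minimal Python/NumPy sketch of the quantized, normalized HSV histogram described above could look as follows. This is not the authors' MATLAB code; it assumes an RGB image stored as an H x W x 3 array and uses scikit-image as one possible choice for the RGB-to-HSV conversion.

```python
import numpy as np
from skimage.color import rgb2hsv  # assumed choice of conversion routine

def hsv_histogram(rgb_image, bins=(8, 8, 8)):
    """Normalized HSV color histogram with bins[0]*bins[1]*bins[2] bins."""
    hsv = rgb2hsv(rgb_image)                    # H, S, V each scaled to [0, 1]
    # Quantize each channel into the requested number of levels.
    h = np.minimum((hsv[..., 0] * bins[0]).astype(int), bins[0] - 1)
    s = np.minimum((hsv[..., 1] * bins[1]).astype(int), bins[1] - 1)
    v = np.minimum((hsv[..., 2] * bins[2]).astype(int), bins[2] - 1)
    # Map each (h, s, v) triple to a single bin index and count pixels.
    idx = (h * bins[1] + s) * bins[2] + v
    hist = np.bincount(idx.ravel(), minlength=bins[0] * bins[1] * bins[2])
    # H'[i] = H[i] / p, where p is the total number of pixels.
    return hist / idx.size
```

With the default (8, 8, 8) bins this yields the 512-bin histogram used later in Section 6.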

4. TEXTURE FEATURE

Like color, texture is a powerful low-level feature for image search and retrieval applications. Much work has been done on texture analysis, classification and segmentation over the last four decades, yet there is still a lot of potential for research. So far, there is no unique definition of

texture; however, an encapsulating scientific definition as given in [56] can be stated as: “Texture is an attribute representing the spatial arrangement of the grey levels of the pixels in a region or image”. The commonly known texture descriptors are the Wavelet Transform [57], the Gabor filter [58], co-occurrence matrices [59] and Tamura features [60]. We have used the Wavelet Transform, which decomposes an image into orthogonal components, because of its good localization and low computational cost [30 and 31].

4.1 Haar Discrete Wavelet Transforms

The discrete wavelet transform (DWT) [61] is used to transform an image from the spatial domain into the frequency domain. The wavelet transform represents a function as a superposition of a family of basis functions called wavelets. Wavelet transforms extract information from a signal at different scales by passing the signal through low pass and high pass filters. Wavelets provide multi-resolution capability and good energy compaction. Wavelets are robust with respect to color intensity shifts and can capture both texture and shape information efficiently. The wavelet transform can be computed in linear time, thus allowing for very fast algorithms [28]. DWT decomposes a signal into a set of basis functions and wavelet functions. The wavelet transform computation of a two-dimensional image is also a multi-resolution approach, which applies recursive filtering and sub-sampling. At each level (scale), the image is decomposed into four frequency sub-bands, LL, LH, HL and HH, where L denotes low frequency and H denotes high frequency, as shown in Figure 1.

Figure 1. Discrete Wavelet Sub-band Decomposition
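A minimal NumPy sketch of one level of this decomposition is given below. It is an illustration, not the authors' MATLAB implementation, and assumes the image dimensions are even; the four returned arrays correspond to the LL, LH, HL and HH sub-bands of Figure 1.

```python
import numpy as np

def haar_dwt2(channel):
    """One-level 2-D Haar DWT of a single-channel image -> (LL, LH, HL, HH)."""
    x = np.asarray(channel, dtype=float)
    # Low-pass (pairwise average) and high-pass (pairwise difference)
    # filtering with down-sampling along one axis.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Repeat along the other axis to obtain the four sub-bands.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # approximation (LL)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # detail sub-band (LH)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # detail sub-band (HL)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # detail sub-band (HH)
    return ll, lh, hl, hh
```

Applying haar_dwt2 again to the LL band would give the next decomposition level of the multi-resolution scheme.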

Haar wavelets have been widely used since their introduction by Haar [62]. Haar used these functions to give an example of a countable orthonormal system for the space of square-integrable functions on the real line. In this paper, we have used Haar wavelets to compute the feature signatures, because they are the fastest to compute and have also been found to perform well in practice [63]. Haar wavelets enable us to speed up the wavelet computation phase for thousands of sliding windows of varying sizes in an image. The Haar wavelet's mother wavelet function ψ(t) can be described as:

ψ(t) =   1,   0 ≤ t < 1/2
        -1,   1/2 ≤ t < 1
         0,   otherwise                    (1)

and its scaling function φ(t) can be described as:


φ(t) =   1,   0 ≤ t < 1
         0,   otherwise                    (2)

5. FEATURE SIMILARITY MATCHING

Similarity matching is the process of approximating a solution based on the computation of a similarity function between a pair of images; the result is a set of likely matches rather than an exact one. Many researchers have used different similarity matching techniques [23]-[27]. In our study, we have used the Histogram Intersection Distance method, for the reasons given in [64].

5.1 Histogram Intersection Distance:

Swain and Ballard [27] proposed histogram intersection for color image retrieval. Intersection of histograms was originally defined as:

d(Q, D) = Σ_i min(Q[i], D[i]) / Σ_i |D[i]|                    (3)

Smith and Chang [55] extended the idea, by modifying the denominator of the original definition, to include the case when the cardinalities of the two histograms are different:

d(Q, D) = Σ_i min(Q[i], D[i]) / min(|Q|, |D|)                 (4)

where |Q| and |D| represent the magnitudes of the histograms of the query image and a representative image in the database.
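As an illustration, a minimal NumPy version of the normalized intersection of Eq. (4) is sketched below (an assumption about one possible implementation, not the authors' code); the value lies in [0, 1], with larger values indicating greater similarity.

```python
import numpy as np

def histogram_intersection(q, d):
    """Normalized intersection (Eq. 4) of query histogram q and database histogram d."""
    q = np.asarray(q, dtype=float)
    d = np.asarray(d, dtype=float)
    # Sum of bin-wise minima, normalized by the smaller histogram magnitude.
    return np.minimum(q, d).sum() / min(q.sum(), d.sum())
```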

6. PROPOSED METHOD

In this study we propose two algorithms for image retrieval, based on the color histogram and on the wavelet-based color histogram. The block diagrams of the proposed methods are shown in Figure 2 and Figure 3, and an illustrative sketch of each method is given after the corresponding figure.

6.1. Color Histogram

Step 1. Convert the RGB color space image into the HSV color space.

Step 2. Color quantization is carried out using the color histogram by assigning 8 levels each to hue, saturation and value to give a quantized HSV space with 8x8x8 = 512 histogram bins.

Step 3. The normalized histogram is obtained by dividing by the total number of pixels.

Step 4. Repeat Step 1 to Step 3 on an image in the database.

Step 5. Calculate the similarity measure between the query image and the image in the database.

Step 6. Repeat Steps 4 to 5 for all the images in the database.

Step 7. Retrieve the images.

Figure 2. Block diagram of proposed Color Histogram
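Put together, these steps amount to a retrieval loop such as the hypothetical sketch below, which reuses the hsv_histogram and histogram_intersection sketches given earlier; the database is assumed to be a list of (name, rgb_array) pairs, and in practice the database histograms would be precomputed offline.

```python
def retrieve_by_color_histogram(query_rgb, database, top_k=10):
    """Return the top_k database entries most similar to the query image."""
    q_hist = hsv_histogram(query_rgb)
    scores = [(name, histogram_intersection(q_hist, hsv_histogram(img)))
              for name, img in database]
    # A larger intersection means greater similarity, so sort in descending order.
    scores.sort(key=lambda item: item[1], reverse=True)
    return scores[:top_k]
```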

6.2. Wavelet-Based Color Histogram (WBCH).

Step 1. Extract the Red, Green and Blue components from an image.

Step 2. Decompose each of the Red, Green and Blue components using the Haar wavelet transformation at the 1st level to get the approximate coefficients and the vertical, horizontal and diagonal detail coefficients.

Step 3. Combine the approximate coefficients of the Red, Green and Blue components.

Step 4. Similarly combine the horizontal and vertical coefficients of the Red, Green and Blue components.

Step 5. Assign the weights 0.003 to the approximate coefficients, 0.001 to the horizontal and 0.001 to the vertical coefficients (experimentally observed values).

Step 6. Convert the approximate, horizontal and vertical coefficients into the HSV plane.

Step 7. Color quantization is carried out using the color histogram by assigning 8 levels each to hue, saturation and value to give a quantized HSV space with 8x8x8 = 512 histogram bins.

Step 8. The normalized histogram is obtained by dividing by the total number of pixels.

Step 9. Repeat Step 1 to Step 8 on an image in the database.

Step 10. Calculate the similarity measure between the query image and the image in the database.

Step 11. Repeat Steps 9 to 10 for all the images in the database.

Step 12. Retrieve the images.

Figure 3. Block diagram of proposed Wavelet-Based Color Histogram (WBCH). (A-approximate coefficient, H-horizontal detail coefficient, V-vertical detail coefficient).
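One possible reading of these steps, expressed as a Python sketch that reuses haar_dwt2 and hsv_histogram from the earlier sketches, is shown below. The rescaling of the weighted coefficients to [0, 1] before the HSV conversion is an assumption, since the paper does not spell out this detail; the weights follow Step 5.

```python
import numpy as np

def wbch_feature(rgb_image):
    """Wavelet-based color histogram (WBCH) feature for one RGB image."""
    planes = []
    for c in range(3):                                 # R, G, B components (Step 1)
        ll, lh, hl, _ = haar_dwt2(rgb_image[..., c])   # 1st-level Haar DWT (Step 2)
        # Weight the approximate (0.003) and the two detail (0.001 each)
        # coefficients and combine them into one plane per channel (Steps 3-5).
        planes.append(0.003 * ll + 0.001 * lh + 0.001 * hl)
    combined = np.stack(planes, axis=-1)
    # Rescale to [0, 1] before the HSV conversion and the 8x8x8
    # quantized, normalized histogram (Steps 6-8).
    rng = combined.max() - combined.min()
    combined = (combined - combined.min()) / (rng if rng > 0 else 1.0)
    return hsv_histogram(combined)
```

The retrieval loop of Section 6.1 can then be reused unchanged, with wbch_feature substituted for hsv_histogram when indexing and querying.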

7. PERFORMANCE EVALUATION

The retrieval performance of the system can be measured in terms of its recall and precision. Recall measures the ability of the system to retrieve all the models that are relevant, while precision measures the ability of the system to retrieve only the models that are relevant. It has been reported that the histogram gives the best performance in terms of recall and precision values [35, 44]. They are defined as:

Precision = (Number of relevant images retrieved) / (Total number of images retrieved) = A / (A + B)        (5)

Recall = (Number of relevant images retrieved) / (Total number of relevant images) = A / (A + C)        (6)

where A represents the number of relevant images retrieved, B the number of irrelevant images retrieved, and C the number of relevant images that were not retrieved. The number of relevant images retrieved is the number of returned images that are similar to the query image, and the total number of images retrieved is the number of images returned by the search engine. The average precision for the images belonging to the q-th category (A_q) has been computed by [65]:

P_q = (1 / |A_q|) Σ_{k ∈ A_q} p(k)                    (7)

where p(k) is the precision for query image k and q = 1, 2, ..., 10.

Finally, the average precision is given by:

P = (1/10) Σ_{q=1}^{10} P_q                           (8)
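For concreteness, the evaluation of Eqs. (5)-(8) could be computed with small helpers such as the following sketch, which assumes that relevance means "same WANG category as the query" and that each query returns a ranked list of category labels.

```python
def precision_recall(retrieved_labels, query_label, total_relevant):
    """Precision (Eq. 5) and recall (Eq. 6) for one query."""
    a = sum(1 for lab in retrieved_labels if lab == query_label)  # relevant retrieved
    precision = a / len(retrieved_labels)     # A / (A + B)
    recall = a / total_relevant               # A / (A + C)
    return precision, recall

def category_average_precision(precisions):
    """Eq. (7): mean precision over the queries of one category."""
    return sum(precisions) / len(precisions)

def overall_average_precision(category_means):
    """Eq. (8): mean of the ten per-category averages."""
    return sum(category_means) / len(category_means)
```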

8. EXPERIMENT

The proposed method has been implemented using Matlab 7.3 and tested on the general-purpose WANG database [66] containing 1,000 Corel stock photo images, in JPEG format, of size 384x256 or 256x384, as shown in Figure 4. The search is based on similarity rather than on an exact match. We have followed the image retrieval technique described in Section 6.1 with different quantization schemes. The quality of the image retrieval with the quantization schemes (4, 4, 4), (4, 4, 8), (4, 8, 8), (8, 4, 4), (8, 8, 4), (8, 8, 8), (16, 4, 4) and (18, 3, 3) has been evaluated by randomly selecting 10 query images of each category from the image database. Each query returns the top 10 images from the database, and the precision values calculated using equation (5) together with the average precision calculated using equation (8) are given in Table 1. The average precision value (7.8) of the (8, 8, 8) quantization scheme indicates better retrieval results than the others.

Table 1. Precision Using Different Quantization Schemes

Category            4,4,4   4,4,8   4,8,8   8,4,4   8,8,4   8,8,8   16,4,4   18,3,3
African People        9       9       9       9       9       9       10       9
Beach                 6       5       5       4       6       6        3       5
Building              6       6       6       6       7       6        8       9
Buses                 9       9       9       9       9       9        9       8
Dinosaurs             8      10       9       7       8       9        8       8
Elephants             7       8       8       7       8       9        7       7
Flowers               6       6       6       7       7       7        7       6
Horses                9       9       9      10       9       9       10      10
Mountains             6       6       6       5       5       6        6       5
Food                  8       7       9       8       8       8        8       8
Average precision    7.4     7.5     7.6     7.2     7.6     7.8      7.6     7.5

The WBCH method, as discussed in Section 6.2, has been used to study the image retrieval using the (8,8,8) color quantization scheme, and the performance of the proposed image retrieval technique has been evaluated by comparing the results with the results of different authors [14, 35, 36 and 41], as shown in Table 2. The effectiveness of the WBCH retrieval method is evaluated by selecting 10 query images under each category of different semantics. For each query, we examined the precision of the retrieval based on the relevance of the image semantics. The semantic relevance is determined by manually comparing the query image with each of the retrieved images. The precision values calculated using equation (5) and the average precision calculated using equation (8) are shown in Table 2. The 10 query retrievals by the proposed method are shown in Figures 5-14, with an average retrieval time of 1 min. These results clearly show that the performance of the proposed method is better than that of the other methods.

Table 2: Precision of the Retrieval by different methods

Classes   Category           WBCH    CH      [14]    [35]    [36]    [41]
1         African people     0.65    0.72    0.68    0.75    0.54    0.562
2         Beach              0.62    0.53    0.54    0.6     0.38    0.536
3         Building           0.71    0.61    0.56    0.43    0.30    0.61
4         Buses              0.92    0.93    0.89    0.69    0.64    0.893
5         Dinosaurs          0.97    0.95    0.99    1       0.96    0.984
6         Elephants          0.86    0.84    0.66    0.72    0.62    0.578
7         Flowers            0.76    0.66    0.89    0.93    0.68    0.899
8         Horses             0.87    0.89    0.8     0.91    0.75    0.78
9         Mountains          0.49    0.47    0.52    0.36    0.45    0.512
10        Food               0.77    0.82    0.73    0.65    0.53    0.694
          Average Precision  0.762   0.742   0.726   0.704   0.585   0.7048

Figure 4. Sample of WANG Image Database

Figure 5. Retrieval results for African People

Figure 6. Retrieval results for Beach

Figure 7. Retrieval results for Building

Figure 8. Retrieval results for Bus

Figure 9. Retrieval results for Dinosaurs

Figure 10. Retrieval results for Elephants

Figure 11. Retrieval results for Flowers

Figure 12. Retrieval results for Horses

Figure 13. Retrieval results for Mountains

Figure 14. Retrieval results for Food

9. CONCLUSION

In this paper, we presented a novel approach to Content Based Image Retrieval that combines color and texture features, called Wavelet-Based Color Histogram Image Retrieval (WBCHIR). Similarity between the images is ascertained by means of a distance function. The experimental results show that the proposed method outperforms the other retrieval methods in terms of average precision. Moreover, the computational steps are effectively reduced with the use of the wavelet transformation, and as a result there is a substantial increase in retrieval speed. Indexing the whole 1000-image database takes 5-6 minutes.

ACKNOWLEDGEMENTS

One of us (*) is grateful to Prof. Tapodhir Bhattacharjee (VC), Assam University, Silchar, for the award of a UGC Fellowship.

REFERENCES

1. R. Datta, D. Joshi, J. Li and J. Z. Wang, “Image retrieval: Ideas, influences, and trends of the new age”, ACM Computing Surveys, vol. 40, no. 2, pp. 1-60, 2008.
2. J. Eakins and M. Graham, “Content-Based Image Retrieval”, Technical report, JISC Technology Applications Programme, 1999.
3. Y. Rui, T. S. Huang and S. F. Chang, “Image Retrieval: Current Techniques, Promising Directions and Open Issues”, Journal of Visual Communication and Image Representation, 10(4), pp. 39-62, 1999.
4. A. M. Smeulders, M. Worring, S. Santini, A. Gupta and R. Jain, “Content Based Image Retrieval at the End of the Early Years”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), pp. 1349-1380, 2000.
5. Y. Liu, D. Zang, G. Lu and W. Y. Ma, “A survey of content-based image retrieval with high-level semantics”, Pattern Recognition, Vol. 40, pp. 262-282, 2007.
6. T. Kato, “Database architecture for content-based image retrieval”, In Proceedings of the SPIE - The International Society for Optical Engineering, vol. 1662, pp. 112-113, 1992.
7. M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafne, D. Lee, D. Petkovic, D. Steele and P. Yanker, “Query by Image and Video Content: The QBIC System”, IEEE Computer, pp. 23-32, 1995.
8. A. Gupta and R. Jain, “Visual information retrieval”, Communications of the ACM, 40(5), pp. 70-79, 1997.
9. A. Pentland, R. W. Picard and S. Scaroff, “Photobook: Content-Based Manipulation for Image Databases”, International Journal of Computer Vision, 18(3), pp. 233-254, 1996.
10. J. R. Smith and S. F. Chang, “VisualSEEk: a fully automated content-based image query system”, ACM Multimedia, 1996.
11. J. Wang, G. Wiederhold, O. Firschein and S. We, “Content-based Image Indexing and Searching Using Daubechies’ Wavelets”, International Journal on Digital Libraries (IJODL), 1(4), pp. 311-328, 1998.
12. C. Carson, S. Belongie, H. Greenspan and J. Malik, “Blobworld: image segmentation using expectation-maximization and its application to image querying”, IEEE Trans. Pattern Anal. Mach. Intell., 8(8), pp. 1026-1038, 2002.
13. J. Wang, J. Li and G. Wiederhold, “SIMPLIcity: Semantics-sensitive integrated matching for picture libraries”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(9), pp. 947-963, 2001.
14. C. H. Lin, R. T. Chen and Y. K. Chan, “A smart content-based image retrieval system based on color and texture feature”, Image and Vision Computing, vol. 27, pp. 658-665, 2009.
15. J. Huang and S. K. Ravi, “Image Indexing Using Color Correlograms”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Puerto Rico, Jun. 1997.
16. G. Pass and R. Zabih, “Refinement Histogram for Content-Based Image Retrieval”, IEEE Workshop on Application of Computer Vision, pp. 96-102, 1996.
17. M. Stricker and A. Dimai, “Color indexing with weak spatial constraints”, IS&T/SPIE Conf. on Storage and Retrieval for Image and Video Databases IV, Vol. 2670, pp. 29-40, 1996.
18. P. S. Suhasini, K. R. Krishna and I. V. M. Krishna, “CBIR Using Color Histogram Processing”, Journal of Theoretical and Applied Information Technology, Vol. 6, No. 1, pp. 116-122, 2009.
19. R. Chakarvarti and X. Meng, “A Study of Color Histogram Based Image Retrieval”, Sixth International Conference on Information Technology: New Generations, IEEE, 2009.
20. X. Wan and C. C. Kuo, “Color Distribution Analysis and Quantization for Image Retrieval”, In SPIE Storage and Retrieval for Image and Video Databases IV, Vol. SPIE 2670, pp. 9-16, 1996.
21. S. Li and M. C. Lee, “Rotation and Scale Invariant Color Image Retrieval Using Fuzzy Clustering”, Computer Science Journal, Chinese University of Hong Kong, 2004.
22. F. Tang and H. Tae, “Object Tracking with Dynamic Feature Graph”, ICCCN’05, Proceedings of the 14th International Conference on Computer Communications and Networks, 2005.
23. M. Ioka, “A Method of defining the similarity of images on the basis of color information”, Technical Report, IBM Research, Tokyo Research Laboratory, 1989.
24. H. James, S. Harpreet, W. Equits, M. Flickner and W. Niblack, “Efficient Color Histogram Indexing for Quadratic Form Distance Functions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 7, 1995.
25. J. R. Smith and S. F. Chang, “Automated Image Retrieval using Color and Texture”, Technical Report, Columbia University, 1995.
26. V. V. Kumar, N. G. Rao, A. L. N. Rao and V. V. Krishna, “IHBM: Integrated Histogram Bin Matching For Similarity Measures of Color Image Retrieval”, International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 2, No. 3, 2009.
27. M. Swain and D. Ballard, “Color indexing”, International Journal of Computer Vision, 7, pp. 11-32, 1991.
28. A. Natsev, R. Rastogi and K. Shim, “WALRUS: A Similarity Retrieval Algorithm for Image Databases”, In Proceedings of the ACM SIGMOD Int. Conf. on Management of Data, pp. 395-406, 1999.
29. S. Ardizzoni, I. Bartolini and M. Patella, “Windsurf: Region based Image Retrieval using Wavelets”, In IWOSS’99, pp. 167-173, 1999.
30. G. V. D. Wouwer, P. Scheunders and D. V. Dyck, “Statistical texture characterization from discrete wavelet representation”, IEEE Transactions on Image Processing, Vol. 8, pp. 592-598, 1999.
31. S. Livens, P. Scheunders, G. V. D. Wouwer and D. V. Dyck, “Wavelets for texture analysis, an overview”, Proceedings of the Sixth International Conference on Image Processing and Its Applications, Vol. 2, pp. 581-585, 1997.
32. R. C. Gonzalez and E. W. Richard, Digital Image Processing, Prentice Hall, 2001.
33. N. Jhanwar, S. Chaudhurib, G. Seetharamanc and B. Zavidovique, “Content based image retrieval using motif co-occurrence matrix”, Image and Vision Computing, Vol. 22, pp. 1211-1220, 2004.
34. P. W. Huang and S. K. Dai, “Image retrieval by texture similarity”, Pattern Recognition, Vol. 36, pp. 665-679, 2003.
35. G. Raghupathi, R. S. Anand and M. L. Dewal, “Color and Texture Features for content Based image retrieval”, Second International Conference on Multimedia and Content Based Image Retrieval, July 21-23, 2010.
36. P. S. Hiremath and J. Pujari, “Content Based Image Retrieval based on Color, Texture and Shape features using Image and its complement”, 15th International Conference on Advanced Computing and Communications, IEEE, 2007.
37. Y. Chen and J. Z. Wang, “A Region-Based Fuzzy Feature Matching Approach to Content Based Image Retrieval”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 9, pp. 1252-1267, 2002.
38. J. Li, J. Z. Wang and G. Wiederhold, “IRM: Integrated Region Matching for Image Retrieval”, In Proceedings of the 8th ACM International Conference on Multimedia, pp. 147-156, Oct. 2000.
39. M. Banerjee, M. K. Kundu and P. K. Das, “Image Retrieval with Visually Prominent Features using Fuzzy set theoretic Evaluation”, ICVGIP, 2004.
40. Y. Rubner, L. J. Guibas and C. Tomasi, “The earth mover’s distance, multidimensional scaling, and color-based image retrieval”, Proceedings of the DARPA Image Understanding Workshop, pp. 661-668, 1997.
41. M. B. Rao, B. P. Rao and A. Govardhan, “CTDCIRS: Content based Image Retrieval System based on Dominant Color and Texture Features”, International Journal of Computer Applications, Vol. 18, No. 6, pp. 0975-8887, 2011.
42. J. M. Fuertes, M. Lucena, N. P. D. L. Blanca and J. C. Martinez, “A Scheme of Color Image Retrieval from Databases”, Pattern Recognition, Vol. 22, No. 3, pp. 323-337, 2001.
43. Y. K. Chan and C. Y. Chen, “Image retrieval system based on color-complexity and color-spatial features”, The Journal of Systems and Software, Vol. 71, pp. 65-70, 2004.
44. T. Gevers, Color in Image Database, Intelligent Sensory Information Systems, University of Amsterdam, The Netherlands, 1998.
45. X. Wan and C. C. Kuo, “Color distribution analysis and quantization for image retrieval”, In SPIE Storage and Retrieval for Image and Video Databases IV, Vol. SPIE 2670, pp. 9-16, 1996.
46. M. W. Ying and Z. HongJiang, “Benchmarking of image features for content-based retrieval”, IEEE, pp. 253-257, 1998.
47. Z. Zhenhua, L. Wenhui and L. Bo, “An Improving Technique of Color Histogram in Segmentation-based Image Retrieval”, 2009 Fifth International Conference on Information Assurance and Security, IEEE, pp. 381-384, 2009.
48. E. Mathias, “Comparing the influence of color spaces and metrics in content-based image retrieval”, IEEE, pp. 371-378, 1998.
49. S. Manimala and K. Hemachandran, “Performance analysis of Color Spaces in Image Retrieval”, Assam University Journal of Science & Technology, Vol. 7, No. II, pp. 94-104, 2011.
50. S. Sural, G. Qian and S. Pramanik, “Segmentation and Histogram Generation using the HSV Color Space for Image Retrieval”, IEEE ICIP, 2002.
51. R. C. Gonzalez and R. E. Woods, Digital Image Processing, third ed., Prentice Hall, 2007.
52. W. H. Tsang and P. W. M. Tsang, “Edge gradient method on object color”, IEEE TENCON - Digital Signal Processing Applications, pp. 310-349, 1996.
53. X. Wan and C. C. J. Kuo, “A new approach to image retrieval with hierarchical color clustering”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 5, 1998.
54. X. Wan and C. C. Kuo, “Image retrieval with multiresolution color space quantization”, In Electron Imaging and Multimedia System, 1996.
55. J. R. Smith and S. F. Chang, “Tools and techniques for color image retrieval”, in: IS&T/SPIE Storage and Retrieval for Image and Video Databases IV, San Jose, CA, Vol. 2670, pp. 426-437, 1996.
56. IEEE, “IEEE standard glossary of image processing and pattern recognition terminology”, IEEE Standard 610.4-1990, 1990.
57. J. R. Smith and S. Chang, “Transform Features for Texture Classification and Discrimination in Large Image Databases”, Proceedings of the IEEE International Conference on Image Processing, Vol. 3, pp. 407-411, 1994.
58. B. Manjunath, P. Wu, S. Newsam and H. Shin, “A texture descriptor for browsing and similarity retrieval”, Journal of Signal Processing: Image Communication, vol. 16, pp. 33-43, 2000.
59. R. Haralick, “Statistical and structural approaches to texture”, Proceedings of the IEEE, Vol. 67, pp. 786-804, 1979.
60. H. Tamura, S. Mori and T. Yamawaki, “Textural features corresponding to visual perception”, IEEE Transactions on Systems, Man and Cybernetics, Vol. 8, pp. 460-472, 1978.
61. R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Education, 2008.
62. A. Haar, “Zur Theorie der orthogonalen Funktionensysteme”, Math. Annal., Vol. 69, pp. 331-371, 1910.
63. C. E. Jacobs, A. Finkelstein and D. H. Salesin, “Fast Multiresolution image querying”, In Proc. of SIGGRAPH 95, Annual Conference Series, pp. 277-286, 1995.
64. S. Manimala and K. Hemachandran, “Image Retrieval Based on Color Histogram and Performance Evaluation of Similarity Measurement”, Assam University Journal of Science & Technology, Vol. 8, No. II, pp. 94-104, 2011.
65. H. A. Moghadam, T. Taghizadeh, A. H. Rouhi and M. T. Saadatmand, “Wavelet correlogram: a new approach for image indexing and retrieval”, Pattern Recognition, Vol. 38, pp. 2006-2518, 2008.
66. https://www.doczj.com/doc/963012440.html,/docs/related/.

Authors

Ms. Manimala Singha received her B.Sc. and M.Sc. degrees in Computer Science from Assam University, Silchar, in 2005 and 2007 respectively. Presently she is working towards her Ph.D. as a Research Scholar; her areas of interest include image segmentation, feature extraction, and image searching in large databases.

Prof. K. Hemachandran is associated with the Dept. of Computer Science, Assam University, Silchar, since 1998. He obtained his M.Sc. Degree from Sri Venkateswara University, Tirupati and M.Tech. and Ph.D. Degrees from Indian School of Mines, Dhanbad. His areas of research interest are Image Processing, Software Engineering and Distributed Computing.
