Foreign Literature Translation: Computer Image Processing and Color Analysis
Digital image processing: foreign literature translation references (this document contains the Chinese-English parallel text, i.e. the English original and its Chinese translation).

Original text:

Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract: This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by a CCD is pre-processed through image editing (clipping), histogram equalization, binary conversion and feature parameter extraction to achieve casting surface roughness measurement. A three-dimensional evaluation method is adopted to obtain the evaluation parameters and the casting surface roughness from the extracted feature parameters. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness based on image processing technology.

Keywords: casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demand for machining quality and surface roughness has increased greatly, and machine vision inspection based on image processing has become one of the hotspots of measurement technology in the mechanical industry owing to advantages such as non-contact operation, high speed, suitable precision and strong anti-interference ability [1,2]. Because the casting surface follows no regular pattern and its roughness varies over a wide range, detection parameters related only to the height direction cannot meet the current requirements of photoelectric technology; the horizontal spacing of the roughness also requires quantitative representation. Therefore, taking a three-dimensional evaluation system of casting surface roughness as the goal [3,4], a surface roughness measurement method based on image processing technology is presented. Image preprocessing is carried out through image enhancement and binary conversion, and a three-dimensional roughness evaluation based on the extracted feature parameters is performed. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system is composed of a sample carrier, a microscope, a CCD camera, an image acquisition card and a computer. The sample carrier is used to hold the castings under test. According to the experimental requirements, either a fixed carrier with a manually adjustable sample position or a fixed specimen with a movable sampling stage can be selected. Figure 1 shows the whole processing procedure. First, the casting to be inspected is placed against as well-illuminated a background as possible; then, after adjusting the optical lens and setting the CCD camera resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature value extraction of the casting surface are then carried out by the corresponding software, and finally the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image clipping, equalization, image enhancement, binary conversion, etc. The original and clipped images of the measured casting are given in Figure 2.
In this figure, a) presents the original image and b) shows the clipped image.

A. Image Enhancement

Image enhancement is a processing method that highlights certain image information according to specific needs while weakening or removing unwanted information at the same time [5]. In order to obtain a clearer contour of the casting surface, equalization processing of the image, namely correction of the image histogram, should be performed before image segmentation. Figure 3 shows the original grayscale image, the equalized image and their histograms. As shown in the figure, after gray-level equalization each gray level of the histogram has roughly the same number of pixels and the histogram becomes flatter. The image appears clearer after the correction and its contrast is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalization processing image

B. Image Segmentation

Image segmentation is in essence a process of pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained with MATLAB's graythresh function (thresh = graythresh(I)). Figure 4 shows the binary image. The black areas of the image display the portion of the contour whose gray value is less than the threshold (0.43137), while the white areas show gray values greater than the threshold. The shadows and shading that appear in the bright region may be caused by noise or by surface depressions.

Fig. 4 Binary conversion

IV. ROUGHNESS PARAMETER EXTRACTION

In order to detect the surface roughness, it is necessary to extract feature parameters of roughness. The histogram mean and variance are parameters used to characterize the texture size of the surface contour, while the peak area per unit surface is a parameter that reflects the roughness of the workpiece in the horizontal direction, and the kurtosis parameter characterizes the roughness in both the vertical and horizontal directions. Therefore, this paper establishes the histogram mean and variance, the peak area per unit surface and the kurtosis as the roughness evaluation parameters of the three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled in MATLAB. Figure 5 shows the detection interface for surface roughness. Image preprocessing of the clipped casting image, including image filtering, image enhancement, image segmentation and histogram equalization, can be successfully carried out with this software, and the extracted evaluation parameters of surface roughness can also be displayed.

Fig. 5 Automatic roughness measurement interface

V. CONCLUSIONS

This paper investigates a casting surface roughness measurement method based on digital image processing technology. The method is composed of image acquisition, image enhancement, binary conversion and the extraction of characteristic roughness parameters of the casting surface. The interface for image preprocessing and the extraction of roughness evaluation parameters is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness.

REFERENCES

[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] BRADLEY C. Automated surface roughness measurement [J].
The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital Image Processing and Application [M]. China Electric Power Press, 2005.

Translation: Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.
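To make the preprocessing chain described in the article above more concrete (clipping, histogram equalization, binary conversion by an automatically chosen threshold), here is a minimal Python/OpenCV sketch. It is an illustration only, not the authors' MATLAB implementation; the file name, the clipping coordinates and the use of Otsu's method as the automatic threshold (MATLAB's graythresh also implements Otsu) are assumptions.

```python
import cv2

# Load the casting surface image as grayscale (file name is hypothetical).
img = cv2.imread("casting_surface.png", cv2.IMREAD_GRAYSCALE)

# Clip a region of interest (coordinates are placeholders for the inspected area).
roi = img[100:400, 150:450]

# Histogram equalization: flattens the gray-level histogram and raises contrast.
equalized = cv2.equalizeHist(roi)

# Binary conversion with an automatically selected global threshold (Otsu).
thresh_value, binary = cv2.threshold(
    equalized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
)
print("selected threshold:", thresh_value)  # analogous to graythresh's output, scaled to 0-255
```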
Main content of study

1. Digital image processing, also called computer image processing, refers to converting an image signal into a digital signal and processing it with a computer, so as to improve the usefulness of the image and achieve the desired result.
Examples include adjusting the contrast of a photograph, removing noise from an industrial television image corrupted by noise, and extracting feature parameters of targets from satellite images.
Compared with the long history of human fascination with the mechanism of vision, digital image processing is still a relatively young discipline.
Yet in its short history it has been applied, with varying degrees of success, to almost every field related to imaging.
Because of the inherent appeal of its form of presentation (display as images), it has attracted a great deal of attention from scientists and the general public alike.
Several new technological trends will further stimulate the growth of this field: parallel processing supported by low-cost microprocessors; low-cost charge-coupled devices (CCDs) for image digitization; new storage technologies for high-capacity, low-cost storage arrays; and low-cost, high-resolution color display systems.
Another driving force is the steady emergence of new applications.
The use of digital imaging technology continues to grow in commercial, industrial and medical applications.
Even as military spending is cut, digital image processing is used ever more widely in remote-sensing imaging.
With low-cost hardware and several very important emerging applications, we can expect digital image processing to play an even more important role in the future.
2. Image enhancement techniques. Image enhancement refers to processing methods that highlight certain information in an image according to specific needs while weakening or removing unwanted information.
Its main purpose is to make the processed image more useful than the original for a particular application.
Image enhancement techniques mainly include histogram modification, image smoothing, image sharpening and color processing.
Spatial-domain smoothing: processing carried out to suppress noise and improve image quality is called image smoothing or denoising.
It can be performed in the spatial domain or in the frequency domain.
Several spatial-domain smoothing methods are introduced here.
(1) Local smoothing. Local smoothing is also called neighborhood averaging or moving averaging.
It replaces the original gray value of a pixel with the average gray value of the pixels in its neighborhood, thereby smoothing the image.
That is, neighborhood averaging is a denoising method that takes the average gray value of the pixels in the neighborhood of the current pixel as the output value for that pixel.
Its effect is equivalent to convolving the image with such an averaging template (kernel).
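As an illustration of the neighborhood-averaging idea described above, the following Python sketch (assuming OpenCV and NumPy are available; the 3x3 kernel size and the file name are arbitrary choices) convolves an image with a uniform averaging template:

```python
import cv2
import numpy as np

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# A 3x3 averaging template: every coefficient is 1/9, so the output pixel is
# the mean gray value of its 3x3 neighborhood.
kernel = np.ones((3, 3), dtype=np.float32) / 9.0

# Convolving the image with this template performs local (neighborhood) smoothing.
smoothed = cv2.filter2D(img, -1, kernel)
```

A larger template suppresses more noise but also blurs edges more strongly, which is the usual trade-off of this method.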
English source material

Image processing is not a one-step process. We can distinguish several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing. Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

Traffic problems are becoming more and more serious, and Intelligent Transport Systems (ITS) have emerged in response. Automatic recognition of license plates is one of the most significant subjects arising from the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the license plate and recognize the characters on it, expressing these characters in text-string form. The license plate recognition (LPR) system has important applications in ITS. In an LPR system, the first step is to locate the license plate in the captured image, which is crucial for character recognition: the recognition rate is governed by the accuracy of license plate location. In this paper, several image manipulation methods are compared and analyzed, and from them solutions for localization of the car plate are derived. Experiments show that good results have been obtained with these methods.
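The next paragraph describes localization based on edge maps and on the gray-level variation of pixel rows and columns. As a rough illustration of that idea (not the paper's actual algorithm), the sketch below, assuming OpenCV/NumPy and a hypothetical input file, finds a candidate plate band from vertical-edge density and then the left and right borders from the column profile inside that band:

```python
import cv2
import numpy as np

gray = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# Vertical edges respond strongly to the character strokes on a plate.
edges = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))

# Row profile: rows crossing the plate contain many strong vertical edges.
row_profile = edges.sum(axis=1)
top = max(int(np.argmax(row_profile)) - 20, 0)        # crude band around the peak row
bottom = min(top + 40, gray.shape[0])                 # band height is a guessed constant

# Column profile inside the band gives the left and right borders.
col_profile = edges[top:bottom, :].sum(axis=0)
active = np.where(col_profile > 0.5 * col_profile.max())[0]
left, right = int(active.min()), int(active.max())

plate_candidate = gray[top:bottom, left:right]
```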
Methods based on the edge map and on frequency analysis are used in the process of localizing the license plate; that is, the characteristics of the license plate are extracted from the car image after edge detection, and the image is then analyzed and processed until the probable license plate area is extracted. Automated license plate location is a part of image processing and an important part of intelligent traffic systems; it is the key step in vehicle license plate recognition (LPR). A method for recognition in images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined through the gray-level variation pattern of the character distribution, and the left and right borders are determined through the black-white variation of the pixels in every row.

The first steps of digital processing may include a number of different operations and are known as image preprocessing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, the brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance those caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques, which are required by many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and from the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging, edge detection, and the analysis of simple neighborhoods and of complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background, which means that regions of constant features and discontinuities must be identified. This process leads to a label image. Now that we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, the perimeter, and other parameters describing the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties. You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily
distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct one from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. We start with knowledge of the shape and features of an object, at the bottom of the figure, and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, etc. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration, the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, in which a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature in the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and of the perception of our planet. Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement. A recent mainstream application of image processing is the compression of
images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, the most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain. Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to shape detection; such filtering steps are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

Chinese translation (excerpt): Image processing is not a process that can be completed in a single step.
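The passage above mentions Fourier transforms as a context-independent way to perform image filtering. A minimal NumPy sketch of frequency-domain low-pass filtering is given below as an illustration; the cutoff radius and the random stand-in image are arbitrary assumptions, not taken from the source.

```python
import numpy as np

def lowpass_filter(img: np.ndarray, radius: int = 30) -> np.ndarray:
    """Keep only frequencies within `radius` of the spectrum center (ideal low-pass)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    mask = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 <= radius ** 2
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

img = np.random.rand(256, 256)        # stand-in for a grayscale image
smooth = lowpass_filter(img, radius=20)
```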
Basic Methods and Applications of Computer Image Processing

Computer image processing is the discipline of processing and analyzing images by means of computer technology. It covers image acquisition, image processing, image analysis and image display. This section introduces the basic methods and applications of computer image processing. The detailed steps are as follows:

I. Image acquisition
  1. Digital cameras: images are acquired with digital cameras, mobile phones and similar devices.
  2. Scanners: digital images are obtained by scanning paper photographs or documents.

II. Image processing
  1. Image preprocessing
    a. Denoising: noise in the image is removed with filtering algorithms.
    b. Enhancement: the clarity and visual quality of the image are improved by adjusting parameters such as contrast and brightness.
    c. Correction: geometric distortions of the image are corrected, for example by rotation or perspective transformation.
  2. Image segmentation
    a. Threshold segmentation: pixels are divided into classes by comparing their gray values with a preset threshold.
    b. Region growing: neighboring pixels are grouped into regions according to seed points and a growth criterion.
    c. Edge detection: the edges of the image are extracted by detecting regions of large gray-level change.
  3. Feature extraction
    a. Shape features: shape features of the objects in the image, such as perimeter and area, are extracted.
    b. Texture features: texture features of the objects, such as the gray-level co-occurrence matrix (GLCM), are extracted (see the sketch after this outline).
    c. Frequency-domain features: frequency-domain features of the image are extracted with the Fourier transform, the wavelet transform or similar methods.
  4. Image restoration
    a. Image deblurring: a blurred image is restored by estimating the image degradation model and applying inverse filtering.
    b. Image interpolation: a low-resolution image is restored (upscaled) by pixel interpolation.

III. Image analysis
  1. Object detection
    a. Object detection: objects in the image are detected and identified with machine learning or deep learning methods.
    b. Face detection: faces in the image are identified through feature extraction and classifiers.
  2. Image classification
    a. Supervised learning: a classifier is trained with labeled training data and assigns images to classes according to their features.
    b. Unsupervised learning: with unlabeled training data, images are clustered by similarity and automatically grouped into classes.
  3. Image registration
    a. Point-to-point matching: registration is achieved by finding feature points common to the two images and computing the corresponding similarity measure.
    b. Region matching: the two images are divided into small regions, similarity matching is performed within the regions, and the best registration is found with an optimization algorithm.
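As referenced in item II.3.b above, texture can be summarized with a gray-level co-occurrence matrix (GLCM). The following NumPy sketch builds a GLCM for a single offset (one pixel to the right) and derives a simple contrast measure; the quantization to 8 levels, the choice of offset and the random stand-in patch are illustrative assumptions.

```python
import numpy as np

def glcm(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Gray-level co-occurrence matrix for the offset (0, +1), normalized to probabilities."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)   # quantize gray values
    q = np.clip(q, 0, levels - 1)
    mat = np.zeros((levels, levels), dtype=np.float64)
    left, right = q[:, :-1], q[:, 1:]            # horizontally adjacent pixel pairs
    for i, j in zip(left.ravel(), right.ravel()):
        mat[i, j] += 1
    return mat / mat.sum()

def glcm_contrast(p: np.ndarray) -> float:
    """Contrast: expected squared gray-level difference of co-occurring pixels."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a texture patch
print("GLCM contrast:", glcm_contrast(glcm(img)))
```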
GLCM: gray-level co-occurrence matrix.
Indexing service.
Binary image: a digital image with only two gray levels (usually 0 and 1, black and white).
Blur: a loss of image sharpness caused by defocus, low-pass filtering, camera motion, and so on.
Shape; shape from texture.
Mathematical morphology.
Border: the first and last rows or columns of an image.
Boundary chain code: a sequence of directions that defines the boundary of an object.
Boundary pixel: an interior pixel that is adjacent to at least one background pixel.
Boundary tracking: an image segmentation technique in which arcs are detected by sequentially moving from one pixel to the next along the arc.
Brightness: the value associated with a point in an image, representing the amount of light emitted or reflected by the object at that point.
Change detection: comparing the pixels of two registered images, for example by subtraction.
Closed curve: a curve whose start and end points coincide.
Cluster: a set of points that lie close together in some space (for example, in feature space).
Cluster analysis: the detection, measurement and description of clusters in a space.
Concave: an object is concave if there exist at least two points inside the object whose connecting line is not entirely contained in the object.
Connected.
Neural network.
Contour encoding: an image compression technique that encodes only the boundaries of regions of uniform gray level.
Contrast: the degree of difference between the average brightness (or gray level) of an object and that of its surrounding background.
Contrast stretch: a linear gray-level transformation.
Convolution: an operation that combines two functions into a third; convolution characterizes the operation of a linear shift-invariant system.
Deblurring: (1) an operation that reduces image blur and sharpens image detail; (2) removing or reducing the blur in an image, usually one step of image restoration or reconstruction.
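The glossary above defines contrast stretch as a linear gray-level transformation. A minimal NumPy sketch of such a stretch, mapping the observed minimum and maximum gray values onto the full 0-255 range (the tiny input array is an arbitrary stand-in):

```python
import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly map [img.min(), img.max()] onto the full 8-bit range [0, 255]."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

img = (np.random.rand(4, 4) * 100 + 50).astype(np.uint8)   # low-contrast stand-in
print(contrast_stretch(img))
```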
The Research of Digital Image Processing Techniques

1 Introduction

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.

1.1 What Is Digital Image Processing?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image. We consider these definitions in more formal terms in Chapter 2.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its input and output are images. Mid-level processing of images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.
A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision. Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value. The concepts developed in the following chapters are the foundation for the methods used in those application areas.

1.2 The Origins of Digital Image Processing

One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York.
Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern. Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method used to obtain Fig. 1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method. The improvements over Fig. 1.1 are evident, both in tonal quality and in resolution.

Figure 1.1 A digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces. (McFarlane)
Figure 1.2 A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice. Some errors are visible. (McFarlane)

The early Bartlane systems were capable of coding images in five distinct levels of gray. This capability was increased to 15 levels in 1929. Figure 1.3 is typical of the images that could be obtained using the 15-tone equipment. During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process considerably. Although the examples just cited involve digital images, they are not considered digital image processing results in the context of our definition, because computers were not involved in their creation. Thus, the history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display, and transmission.

The idea of a computer goes back to the invention of the abacus in Asia Minor, more than 5000 years ago. More recently, there have been developments in the past two centuries that are the foundation of what we call a computer today. However, the basis for what we call a modern digital computer dates back to only the 1940s, with the introduction by John von Neumann of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of a central processing unit (CPU), which is at the heart of computers today.
Starting with von Neumann, there was a series of key advances that led to computers powerful enough to be used for digital image processing. Briefly, these advances may be summarized as follows:
(1) the invention of the transistor at Bell Laboratories in 1948;
(2) the development in the 1950s and 1960s of the high-level programming languages COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator);
(3) the invention of the integrated circuit (IC) at Texas Instruments in 1958;
(4) the development of operating systems in the early 1960s;
(5) the development of the microprocessor (a single chip consisting of the central processing unit, memory, and input and output controls) by Intel in the early 1970s;
(6) the introduction by IBM of the personal computer in 1981;
(7) progressive miniaturization of components, starting with large-scale integration (LSI) in the late 1970s, then very-large-scale integration (VLSI) in the 1980s, to the present use of ultra-large-scale integration (ULSI).

Figure 1.3 Unretouched cable picture of Generals Pershing and Foch, transmitted in 1929 from London to New York by 15-tone equipment.

Concurrent with these advances were developments in the areas of mass storage and display systems, both of which are fundamental requirements for digital image processing. The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. The birth of what we call digital image processing today can be traced to the availability of those machines and to the onset of the space program during that period. It took the combination of those two developments to bring into focus the potential of digital image processing concepts. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964, when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera. Figure 1.4 shows the first image of the moon taken by Ranger 7 on July 31, 1964 at 9:09 A.M. Eastern Daylight Time (EDT), about 17 minutes before impacting the lunar surface (the markers, called reseau marks, are used for geometric corrections, as discussed in Chapter 5). This also is the first image of the moon taken by a U.S. spacecraft. The imaging lessons learned with Ranger 7 served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon, the Mariner series of flyby missions to Mars, the Apollo manned flights to the moon, and others.

In parallel with space applications, digital image processing techniques began, in the late 1960s and early 1970s, to be used in medical imaging, remote Earth resources observations, and astronomy. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT) for short, is one of the most important events in the application of image processing in medical diagnosis.
Computerized axial tomography is a process in which a ring of detectors encircles an object (or patient), and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. As the source rotates, this procedure is repeated. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional (3-D) rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. It is interesting to note that X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most active application areas of image processing today.

Figure 1.4 The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.)

Chinese translation (excerpt): The Research of Digital Image Processing Techniques. 1 Introduction. Interest in digital image processing methods stems from two principal application areas: the first is the improvement of pictorial information for human interpretation; the second is the storage, transmission, and display of image data for autonomous machine perception.
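Relating back to the definition of a digital image as a discrete two-dimensional function f(x, y) given at the start of this excerpt, the following NumPy sketch (with an arbitrary example array) shows pixel access and the "trivial" operation mentioned above, computing the average intensity of an image:

```python
import numpy as np

# A tiny 3x4 digital image: f(x, y) stored as an array of gray levels in [0, 255].
f = np.array([[ 12,  50,  50, 200],
              [ 30, 120, 180, 220],
              [  0,  90, 160, 255]], dtype=np.uint8)

x, y = 1, 2                      # row and column indices of one pixel
print("f(1, 2) =", f[x, y])      # gray level at that location -> 180
print("average intensity =", f.mean())
```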
Overview and Explanation of the Principle of the Colorful Image Colorization Algorithm

1. Introduction
1.1 Overview
This article introduces the principle of the colorful image colorization algorithm and explains it in detail. With the continuous development of computer technology, the field of image processing has made great progress. Image colorization is an important task in image processing: it converts a grayscale image into a richly colored image, allowing people to better understand and perceive the information the image conveys. The colorful image colorization algorithm is a deep-learning-based method that uses a neural network model to automate the colorization process. By training on large datasets, the algorithm learns the color relationships between different regions of an image and generates a color version that matches the original grayscale image. The algorithm has broad application prospects in computer vision, digital media, and art and design.

1.2 Structure of the article
This article first gives an overview of the colorful image colorization algorithm, introducing its basic principle and implementation. It then explains the algorithm's input and output in detail and describes its core principle. The third part explains the algorithm in detail, including the preprocessing steps, the network architecture, and the training and optimization strategy. The fourth part presents the experimental results and evaluation, including the selection and preparation of datasets, quantitative evaluation metrics and methods, and an analysis and discussion of the experimental results. Finally, the article summarizes the content and draws conclusions.

1.3 Purpose
The purpose of this article is to give a comprehensive account of the principle and implementation of the colorful image colorization algorithm. Through the overview and explanation, readers can fully understand its basic principle, its input and output, and its core implementation. Through the detailed explanation of the algorithm and its optimization strategy, readers can learn how to use the algorithm for image colorization and obtain good results in practical applications. Finally, through the presentation of the experiments and evaluation, readers can gain a complete picture of how the algorithm performs in different scenarios.

2. Principle of the colorful image colorization algorithm
2.1 Algorithm overview
The colorful image colorization algorithm is a technique that converts black-and-white (grayscale) images into color images.
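The text above describes colorization as a neural network that maps a grayscale input to a color output. As a purely illustrative sketch of that input/output structure (assuming PyTorch; this toy network is not the actual colorful image colorization model, which predicts a distribution over quantized ab values with a much deeper architecture), a model can take the lightness channel L and predict the two chrominance channels ab of the Lab color space:

```python
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Toy stand-in: maps a 1-channel lightness image to 2 chrominance channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, kernel_size=3, padding=1), nn.Tanh(),  # ab scaled to [-1, 1]
        )

    def forward(self, lightness: torch.Tensor) -> torch.Tensor:
        return self.net(lightness)

model = TinyColorizer()
L = torch.rand(1, 1, 64, 64)   # batch of one normalized grayscale (L-channel) image
ab = model(L)                  # predicted chrominance channels
print(ab.shape)                # torch.Size([1, 2, 64, 64])
```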
Foreign literature translation (Chinese-English parallel text). Original text:

Research on Image Edge Detection Algorithms

Abstract: Digital image processing is a relatively young discipline which, following the rapid development of computer technology, is finding ever wider application. The edge is one of the basic characteristics of an image and is widely used in domains such as pattern recognition, image segmentation, image enhancement and image compression. Image edge detection methods are many and varied. Among them, the brightness-based algorithms have been studied the longest and are the most mature in theory: they detect edges by computing the gradient of the image brightness with difference operators, chiefly the Roberts, Laplacian, Sobel, Canny and LoG operators.
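As an illustration of the gradient-operator approach summarized in this abstract, the sketch below (assuming OpenCV/NumPy and an arbitrary threshold, neither taken from the paper) computes the Sobel gradient magnitude and thresholds it to obtain an edge map:

```python
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# Horizontal and vertical brightness differences (Sobel difference operators).
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude: large values indicate strong brightness changes, i.e. edges.
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# A simple global threshold turns the gradient image into a binary edge map.
edges = (magnitude > 100).astype(np.uint8) * 255
```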
Appendix 1: Foreign-language material

Typically, an image-processing application consists of five steps. First, an image must be acquired. A digitized representation of the image is necessary for further processing. It is denoted by a two-dimensional function I(x, y) that is described by an array; x marks a column and y a row of the array. The domain of x and y depends on the maximal resolution of the image. If the image has size n x m, where n is the number of rows and m the number of columns, then 0 <= x < m and, analogously for y, 0 <= y < n; x and y are positive integers or zero. The same holds for the domain of I(x, y): with I(x, y)max denoting the maximal function value, the gray values satisfy 0 <= I(x, y) <= I(x, y)max. Every possible discrete function value represents a gray value and is called a pixel.

Subsequent preprocessing tries to eliminate disturbing effects; examples are inhomogeneous illumination, noise, and movement. If image-preprocessing algorithms such as movement detection are applied to an image, pixels of different objects with different properties may be merged into regions because they fulfill the criteria of the preprocessing algorithm. A region can therefore be considered as an accumulation of coherent pixels that need not have any similarities. These image regions, or the whole image, can be decomposed into segments, and all pixels contained in a segment must be similar.

Pixels are assigned to objects in the segmentation phase, which is the third step. If objects are isolated from the remainder of the image in the segmentation phase, feature values of these objects must be acquired in the fourth step. The features determined are used in the fifth and last step to perform the classification: the detected objects are allocated to an object class if their measured feature values match the object description. Examples of features are the object height, object width, compactness, and circularity. A circular region has a compactness of one; the compactness becomes larger as the length of the region's boundary increases, and an empty region has a compactness of zero.
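The compactness and circularity features mentioned above can be computed directly from a binary region mask. Below is a minimal Python/OpenCV sketch; the formula compactness = perimeter^2 / (4 * pi * area), which equals one for a circle and grows as the boundary gets longer, is the usual convention and an assumption here, since the text does not give an explicit formula.

```python
import cv2
import numpy as np

def compactness(mask: np.ndarray) -> float:
    """perimeter^2 / (4*pi*area) for the largest region in a binary mask (circle -> 1.0)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    return perimeter ** 2 / (4.0 * np.pi * area)

# A filled circle should give a compactness close to 1.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 60, 255, -1)   # center, radius, color, filled
print(round(compactness(mask), 2))
```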
Color Models

The process of vision by a human being is also controlled by colors. This happens subconsciously with signal colors, but in some situations a human being searches directly for specified colors to solve a problem. The color attribute of an object can likewise be used in computer vision, and this knowledge can help to solve a task. For example, a computer-vision application developed to detect people can use knowledge about the color of the skin for the detection. This can cause ambiguity in some situations: an image of a person walking beside a carton is difficult to analyze if the carton has a color similar to that of the skin.

But there are more problems. The color attributes of objects can be affected by other objects due to light reflections from those objects. Also, the colors of different objects that belong to the same class can vary. For example, a European has a different skin color from an African, although both belong to the class "human being". Color attributes like hue, saturation, intensity, and spectrum can be used to identify objects by their color. Alterations of these parameters can produce different reproductions of the same object, which is often very difficult to handle in computer-vision applications.

For a human being, such alterations are as a rule no problem, or only a small one, for recognition. In computer vision, the selection of an appropriate color space can help. Several color spaces exist; two often-used color spaces, the RGB and YUV color spaces, are depicted here. The RGB color space consists of three color channels: the red, green, and blue channels. Every color is represented by its red, green, and blue parts. This coding follows the three-color theory of Gauss. A pixel's color part in a channel is often measured within the interval [0, 255]; therefore, a color image consists of three gray images. The RGB color space is not very stable with regard to alterations in the illumination, because the representation of a color in the RGB color space contains no separation between the illumination and the color parts. If a computer-vision application which performs image analysis on color images is to be robust against alterations in illumination, the YUV color space can be a better choice, because the color parts and the illumination are represented separately: the color representation uses only the two channels U and V, while the Y channel measures the brightness. The conversion between the RGB and the YUV color space is a linear transformation:

( Y )   (  0.299   0.587   0.114 ) ( R )
( U ) = ( -0.147  -0.289   0.436 ) ( G )        (1-1)
( V )   (  0.615  -0.514  -0.101 ) ( B )

This yields the following equations:

Y =  0.299 R + 0.587 G + 0.114 B                (1-2)
U = -0.147 R - 0.289 G + 0.436 B                (1-3)
V =  0.615 R - 0.514 G - 0.101 B                (1-4)

To show the robustness of the YUV color space with regard to illumination, a constant c is added to the RGB color parts. A positive c produces a brighter color impression and a negative c a darker one. If a transformation into the YUV color space is performed, the constant c affects only the brightness Y and not the color parts U and V:

Y(R + c, G + c, B + c) = Y(R, G, B) + c         (1-5)
U(R + c, G + c, B + c) = U(R, G, B)             (1-6)
V(R + c, G + c, B + c) = V(R, G, B)             (1-7)

The sums of the weights in Equations (1-3) and (1-4) are zero, so the constant c cancels in the color parts; because the weights in Equation (1-2) sum to one, the added constant appears only in the Y component, Equation (1-5). This shows that an alteration of the brightness changes the color parts in the RGB color space, whereas only the Y part is affected in the YUV color space. Examinations of different color spaces have shown that the robustness can be further improved if the color parts are normalized and the weights are varied. One of the color spaces where this is applied is the (YUV)' color space, which is very similar to the YUV color space; the transformation from the RGB color space into the (YUV)' color space, Equation (1-8), is likewise given by a 3 x 3 matrix with normalized weights. The explanations show that the YUV color space should be preferred for object detection by means of the color attribute if the computer-vision application has to deal with changes in the illumination.

Appendix 2: Chinese translation (excerpt). Typically, an image-processing application consists of five steps.
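The coefficients in Equations (1-2) to (1-4) can be checked directly in code. The small NumPy sketch below converts an RGB triple to YUV and confirms that adding a constant c to all three RGB channels changes only Y, as argued above (the sample values are arbitrary):

```python
import numpy as np

# Rows: weights for Y, U and V as given in Equations (1-2) to (1-4).
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.147, -0.289,  0.436],
              [ 0.615, -0.514, -0.101]])

def rgb_to_yuv(rgb):
    return M @ np.asarray(rgb, dtype=float)

rgb = np.array([120.0, 80.0, 200.0])   # arbitrary sample color
c = 30.0                               # uniform brightness offset

yuv = rgb_to_yuv(rgb)
yuv_shifted = rgb_to_yuv(rgb + c)

print(np.round(yuv_shifted - yuv, 3))  # approximately [c, 0, 0]: only Y changes
```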