
图像处理-毕设论文外文翻译(翻译+原文)

英文原文

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up as sketched in Fig. The figure gives an overview of the different phases of image processing.

Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object features of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.
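Digitization amounts to spatial sampling plus amplitude quantization. The NumPy snippet below is only a minimal illustration of that idea; the synthetic scene, the sampling step, and the 8-bit depth are assumptions, not details from the text.

```python
import numpy as np

# Hypothetical continuous scene: irradiance values in [0, 1) on a fine grid.
scene = np.random.default_rng(0).random((1024, 1024))

# Spatial sampling: keep every 4th value in each direction (a 256x256 sensor grid).
sampled = scene[::4, ::4]

# Amplitude quantization: map [0, 1) onto 256 discrete gray levels (8 bits).
digital = np.floor(sampled * 256).clip(0, 255).astype(np.uint8)

print(digital.shape, digital.dtype, digital.min(), digital.max())
```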

As traffic problems become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant research subjects arising from the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the license plate, recognize the characters on it, and express those characters as a text string. The license plate recognition (LPR) system has important applications in ITS. In an LPR system, the first step is to locate the license plate in the captured image, which is very important for character recognition; the recognition rate is governed by the accuracy of license plate location. In this paper, several image processing methods are compared and analyzed, and a solution for license plate localization is derived. Experiments show that good results are obtained with these methods. Methods based on the edge map and frequency analysis are used to localize the license plate: the characteristics of the plate are extracted from the edge-detected vehicle image, and then analyzed and processed until the probable license plate area is extracted.
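A rough sketch of such an edge-map localization is given below, assuming OpenCV and NumPy; the smoothing window and the 0.5 peak fraction are illustrative guesses rather than the parameters used in the paper.

```python
import cv2
import numpy as np

def locate_plate_candidate(gray, frac=0.5):
    """Return (top, bottom, left, right) of a candidate plate region.

    The dense vertical strokes of plate characters concentrate vertical-edge
    energy; projecting that energy onto rows and columns and keeping the
    spans above a fraction of the peak gives a coarse plate location.
    """
    # Vertical edge map (x-gradient) responds strongly to character strokes.
    energy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))

    # Smooth the row projection and keep the band above `frac` of its peak.
    rows = np.convolve(energy.sum(axis=1), np.ones(15) / 15, mode="same")
    band = np.flatnonzero(rows > frac * rows.max())
    top, bottom = int(band[0]), int(band[-1])

    # Repeat on the columns, restricted to the candidate row band.
    cols = np.convolve(energy[top:bottom + 1].sum(axis=0),
                       np.ones(15) / 15, mode="same")
    span = np.flatnonzero(cols > frac * cols.max())
    left, right = int(span[0]), int(span[-1])
    return top, bottom, left, right

# Usage (the file name is a placeholder):
# gray = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)
# print(locate_plate_candidate(gray))
```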

Automated license plate location is a part of image processing and an important component of an intelligent transport system; it is the key step in vehicle License Plate Recognition (LPR). A method is proposed in this paper for locating plates in images with different backgrounds and under different illuminations. The upper and lower borders are determined from the regular gray-level variation of the character distribution, and the left and right borders are determined from the black-white transitions of the pixels in every row.
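One possible reading of this border rule is sketched below in plain NumPy; the jump size and the minimum transition count are assumed values, not figures from the paper.

```python
import numpy as np

def plate_borders(gray, jump=60, min_jumps=14):
    """Estimate plate borders from black-white (gray-level) transitions.

    gray      : 2-D uint8 array (a vehicle image or a coarse candidate region)
    jump      : gray-level difference counted as a black/white transition
    min_jumps : minimum transitions a row needs to be treated as a plate row
    """
    g = gray.astype(np.int16)  # signed type so differences do not wrap around

    # Horizontal gray-level jumps along each row (character stroke <-> background).
    transitions = np.abs(np.diff(g, axis=1)) > jump

    # Upper/lower borders: the band of rows with enough transitions.
    jumpy_rows = np.flatnonzero(transitions.sum(axis=1) >= min_jumps)
    if jumpy_rows.size == 0:
        return None  # no plate-like row found under these assumed thresholds
    top, bottom = int(jumpy_rows[0]), int(jumpy_rows[-1])

    # Left/right borders: columns in which the plate rows actually transition.
    cols = np.flatnonzero(transitions[top:bottom + 1].sum(axis=0) > 0)
    left, right = int(cols[0]), int(cols[-1])
    return top, bottom, left, right
```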

The first steps of digital processing may include a number of different operations and are known as image preprocessing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations.
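By way of illustration, a linear radiometric (brightness/contrast) correction and an affine geometric correction could be sketched as follows with OpenCV; the gain, offset, and rotation angle are made-up parameters, not values from the text.

```python
import cv2
import numpy as np

def radiometric_correction(img, gain=1.2, offset=-15.0):
    """Linear brightness/contrast adjustment: out = gain * in + offset."""
    return np.clip(gain * img.astype(np.float32) + offset, 0, 255).astype(np.uint8)

def geometric_correction(img, matrix):
    """Warp the image with a 2x3 affine matrix to undo a known distortion."""
    h, w = img.shape[:2]
    return cv2.warpAffine(img, matrix, (w, h), flags=cv2.INTER_LINEAR)

# Example: undo a small rotation introduced by the imaging setup (illustrative).
# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# M = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), -2.0, 1.0)
# fixed = geometric_correction(radiometric_correction(img), M)
```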

It may be necessary to correct known disturbances in the image, for instance those caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques, which are required by many indirect imaging techniques, such as tomography, that deliver no direct image.
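One standard way to correct a known blur is deconvolution in the Fourier domain. The Wiener-style inverse filter below is a generic NumPy sketch under the assumption that the point-spread function is known; it is not a method prescribed by the text, and the regularization constant is arbitrary.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Restore an image degraded by a known point-spread function (PSF).

    blurred : 2-D float array, the observed image
    psf     : 2-D float array, the known blur kernel (sums to 1)
    k       : noise-to-signal constant regularizing the inverse filter
    """
    # Pad the PSF to the image size and move its center to the origin so the
    # frequency-domain division is aligned with the image spectrum.
    padded = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    padded[:ph, :pw] = psf
    padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k) avoids the noise amplification that
    # a naive 1/H inverse would produce where the blur suppresses a frequency.
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Usage with an assumed 9x9 box blur:
# psf = np.ones((9, 9)) / 81.0
# restored = wiener_deblur(blurred.astype(np.float64), psf)
```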

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging and edge detection, together with the analysis of simple neighborhoods and of complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background. This means that regions of constant features and discontinuities must be identified. This process leads to a label image. Now that we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show: In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties.
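To make this chain concrete, the sketch below (assuming SciPy and an already-chosen gray-level threshold) goes from a crude segmentation to a label image and then to per-object parameters such as area, mean gray value, and a simple perimeter estimate.

```python
import numpy as np
from scipy import ndimage

def describe_objects(gray, threshold=128):
    """Segment bright objects, build a label image, and measure each region."""
    mask = gray > threshold             # crude segmentation into object/background
    labels, n = ndimage.label(mask)     # label image: one integer id per object
    ids = np.arange(1, n + 1)

    areas = ndimage.sum(mask, labels, ids)    # pixel count per object
    means = ndimage.mean(gray, labels, ids)   # mean gray value per object

    # A simple perimeter estimate: object pixels that touch the background.
    boundary = mask & ~ndimage.binary_erosion(mask)
    perims = ndimage.sum(boundary, labels, ids)

    return [{"label": int(i), "area": float(a),
             "mean_gray": float(m), "perimeter_px": float(p)}
            for i, a, m, p in zip(ids, areas, means, perims)]
```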

You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct such a scene from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. In computer graphics we start with knowledge of the shape and features of an object (at the bottom of Fig.) and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, etc.

There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration, the first time an erupting volcano had been seen in space. It was also a triumph for image processing.

Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, in which a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature of the image more visible.

Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and the color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet.

Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement.

A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed that most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are “algorithmic”: we may use a complicated recursive strategy to find those pixels that constitute the edges in an image.

Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection step; such operations are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.
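As an example of the "mathematical, context-independent" kind of operation mentioned above, a low-pass filter built from a Fourier transform can be sketched in a few lines of NumPy; the circular cutoff radius is an arbitrary choice.

```python
import numpy as np

def fourier_lowpass(img, cutoff=30):
    """Suppress spatial frequencies farther than `cutoff` from the DC term."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h / 2, x - w / 2)   # distance of each frequency from center
    F[dist > cutoff] = 0                    # zero out the high-frequency content
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```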

中文翻译

图像处理不是一步就能完成的过程。可将它分成诸多步骤,这些步骤必须一个接一个地执行,直到从被观察的景物中提取出有用的数据。依据这种方法,就建立起了一个如图所示的层次化的处理方案,该图给出了图像处理各个不同阶段的概观。

图像处理首先是以适当的但不一定是光学的采集系统对图像进行采集。在技术或科学应用中,可以选择一个适当的成像系统。此外,可以建立照明系统,选择最佳波长范围,以及选择其他方案以便用最好的方法在图像中获取有用的对象特征。一旦图像被检测到,必须将其变成数字计算机可处理的形式,这个过程称之为数字化。

随着交通问题的日益严重,智能交通系统应运而生。汽车牌照自动识别系统是近几年发展起来的计算机视觉和模式识别技术在智能交通领域应用的重要研究课题之一。课题的目的是对摄像头获取的汽车图像进行预处理,确定车牌位置,提取车牌上的字符串,并对这些字符进行识别处理,用文本的形式显示出来。车牌自动识别技术在智能交通系统中具有重要的应用价值。在车牌自动识别系统中,首先要将车牌从所获取的图像中分割出来,这是进行车牌字符识别的重要步骤,定位准确与否直接影响车牌识别率。本文在对各种车辆图像处理方法进行分析、比较的基础上,提出了车牌预处理、车牌粗定位和精定位的方法,并且取得了较好的定位结果。车牌定位采取的是边缘检测的频率分析法。从经过边缘提取后的车辆图像中提取车牌特征,进行分析处理,从而初步定出车牌的区域,再利用车牌的先验知识和分布特征对车牌区域二值化图像进行处理,从而得到车牌的精确区域。

汽车牌照的自动定位是图像处理的一种,也是智能交通系统中的重要组成部分之一,是实现车牌识别(LPR)系统的关键。针对不同背景和光照条件下的车辆图像,提出了一种基于灰度图像灰度变化特征进行车牌定位的方法。依据车牌中字符的灰度变化以峰、谷规律分布确定车牌上下边界,对扫描行采用灰度跳变法确定车牌左右边界。

数字化处理的最初几个步骤可能包含许多不同的操作,称之为图像预处理。如果传感器具有非线性特性,就必须予以校正;同样,图像的亮度和对比度也可能需要改善。通常,还需要进行坐标变换以消除在成像时产生的几何畸变。辐射度校正和几何校正是最基本的像素处理操作。

在图像中,对已知的干扰进行校正也是必不可少的,比如由于光学聚焦不准、运动模糊、传感器误差以及图像信号传输误差所引起的干扰。在此还要涉及图像重构技术,许多不直接提供图像的间接成像技术(比如X射线断层成像技术)都需要用到它。

一套完整的处理步骤对于物体的分析和识别是必不可少的。首先,应该采用适当的滤波技术,以便将所感兴趣的物体从其他物体和背景中区分出来。实质上就是从一幅图像(或者数幅图像)中抽取出一幅或几幅特征图像。完成这个任务最基本的工具就是图像处理中所使用的求均值和边缘检测、简单邻域分析,以及被称为纹理的复杂模式分析。物体的一个重要特征还有它的运动,因此检测和确定物体运动的技术也是必不可少的。随后,该物体必须从背景中分离出来,这就意味着必须识别出具有恒定特征的区域以及不连续的地方。这个过程产生出标记图像。既然已经知道了物体精确的几何形状,就可以抽取诸如平均灰度值、面积、周长以及描述物体形状的其他参数等更多的信息[3]。这些参数可用来对物体进行分类,这是许多图像处理应用中至关重要的一步,比如下面一些应用:在一幅显示农业地区的卫星图像中,想要区分种植不同农作物的田地,并获取参数以估算其成熟情况或检测寄生虫造成的损害;

在许多的医学应用中,最基本的问题是检查病理变化,最典型的应用就是染色体畸变分析;印刷体和手写体识别是另一个例子,图像处理一出现,人们就开始对它进行着研究,现在依然困难重重。

人们希望能了解得更多一些,也就是试图理解所读到的内容。这也是图像处理的最后一个步骤,即理解所观察到的景象。当我们使用视觉系统时,实际上已或多或少无意识地在执行这个任务。我们能识别不同的人,可以很轻易地区分出实验室和起居室,可以观察车流以便安全地穿行马路。我们完成这样的任务而并不了解视觉系统工作的奥秘。

长久以来,图像处理和计算机图形学被看做两个不同的领域。现在,人们在这两个领域中的知识都有了极大的提高,可以处理更为复杂的问题。计算机图形学正在努力使三维景物的计算机生成图像达到照片级的真实效果,而图像处理则试图从照相机实际拍摄的图像中重构出三维景物。从这个意义上讲,图像处理完成的是与计算机图形学相反的过程。在计算机图形学中,我们从有关物体形状和特征的知识出发(位于图的底部),向上处理直到得到一幅二维图像。无论是进行图像处理还是计算机图形学,所用到的基本知识都是一样的。我们需要了解照明和物体之间的相互作用,三维景物是如何投影到图像平面上的,等等。

图像处理和计算机图形工作站之间仍然有一些不同之处。但我们应该看到,一旦较好地理解了计算机图形技术和图像处理之间的相似性和相互关系,并开发出了适当的硬件系统,一些既可处理计算机图形,又可完成图像处理任务的通用工作站就会出现。多媒体的出现,即文字、图像、声音和电影的综合,将进一步加速计算机图形学和图像处理的统一。

1980年元月,《科学美国人》发表了一幅被称为“Plume 2”的著名图像,它是1979年3月5日旅行者1号宇宙飞船在木星的卫星木卫一(Io)上探测到的8次火山爆发中的第二次。这幅图像是星际探测中具有里程碑意义的一幅图像,人们第一次在太空中看到了正在爆发的火山。它也是图像处理领域的一次伟大胜利。

直到最近,卫星图像以及行星际探测器所获取的图像一直是图像处理技术的主要应用对象。所谓图像处理,就是对计算机图像进行数值处理以获得某种期望的效果,比如使图像中的某一方面或某一特征更加明显。

图像处理源自于二战中的摄影侦察。当时,处理操作是通过光学方法完成的,判读工作则由人工完成,他们承担诸如量化轰炸效果之类的任务。随着20世纪60年代后期卫星图像的出现,大量基于计算机的工作开展起来。彩色合成的卫星图像有时的确漂亮得让人吃惊,它们已经成为人类视觉文化以及我们对这个行星的认知的一个组成部分。

正如计算机图形学一样,直到近几年,图像处理仍局限在一些研究实验室中,只有这些地方才负担得起昂贵的图像处理计算机,以满足处理大量高分辨率图像所需的巨大计算开销。随着价格低廉的高性能计算机以及诸如数码相机和扫描仪这样的图像采集设备的出现,我们已经看到图像处理技术在向公众领域转移。经典的图像处理技术经常被平面设计人员用来处理照片和计算机生成的图像,比如修复缺陷、改变色彩等,或者通过诸如边缘增强这样的操作创造性地改变图像的整体外观。

目前图像处理的主流应用是图像的压缩,即压缩通过互联网传输的图像,或压缩可视电话和视频会议中的运动视频图像。可视电话是当今同时采用计算机图形学和经典图像处理技术以期获得很高压缩比的交叉领域之一。所有这一切都是图像的数字化表达这一不可抗拒的发展趋势的组成部分。事实上,20世纪最强大的图像形式,即电视图像,也将不可避免地融入数字领域。

图像处理的特点是针对不同问题有大量不同的算法。有一些是应用于每一个像素的、数学的或不依赖上下文的运算,比如,可以使用傅里叶变换来完成图像滤波操作;还有一些则是算法性的,即可以使用复杂的递归策略来找出图像中构成边缘的那些像素。

图像处理操作通常是计算机视觉系统的组成部分。例如,在进行形状检测之前,可以先对输入图像进行滤波,以突出或显示图像的边缘;在计算机视觉系统中,这类处理通常被认为是低级操作。在计算机图形学中,滤波操作被广泛地用于消除混叠或采样失真。
