Unsupervised Domain-Adaptive Person Re-Identification for Cross-Region Scenarios

In the vast sky of artificial intelligence, person re-identification (re-ID) shines like a bright star: using deep learning, it matches and tracks the same pedestrian across different cameras. When the technology crosses regional boundaries, however, it faces serious challenges. Environmental differences between regions, lighting changes, and the diversity of pedestrian poses make pedestrian features hard to describe and match consistently. It is like trying to pick one colour from a rich palette to paint every scene: a daunting task.
To overcome this difficulty, researchers have proposed unsupervised domain adaptation. Its core idea is to let the model adapt automatically to the differences between domains, enabling cross-region person re-ID. It is as if the model were fitted with a pair of "colour-changing glasses" that adjust its "view" to each new environment.
The advantage of unsupervised domain adaptation is that it needs no additional annotated data: the model can be trained directly on source-domain and target-domain data. This greatly reduces labelling cost and improves the model's generalization ability. It also mitigates the mismatch between the source and target data distributions, improving re-ID accuracy.
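One widely used way to realize this source-plus-unlabelled-target training is clustering-based pseudo-labelling: embed the unlabelled target images with a model pre-trained on the labelled source domain, cluster the embeddings, and treat cluster ids as pseudo-identities for fine-tuning. The sketch below illustrates only the pseudo-labelling step on synthetic feature vectors; the feature values and the DBSCAN parameters are illustrative assumptions, not taken from any specific system.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Stand-ins for embeddings of unlabelled target-domain images produced
# by a model pre-trained on the labelled source domain (synthetic data).
target_features = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(20, 8)),  # would-be identity A
    rng.normal(loc=1.0, scale=0.1, size=(20, 8)),  # would-be identity B
])

# Cluster the target embeddings; cluster ids act as pseudo-identity
# labels that a fine-tuning stage could train on (label -1 marks noise).
pseudo_labels = DBSCAN(eps=0.8, min_samples=4).fit_predict(target_features)
print(sorted(set(pseudo_labels)))
```

In a full pipeline the model would then be fine-tuned on these pseudo-labels and the embed-cluster-train loop repeated, but that is beyond this sketch.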
Unsupervised domain adaptation is no master key, however; it faces challenges of its own. First, lacking labels in the target domain, the model can hardly assess its own performance there. It is like groping forward in the dark: the direction is clear, but the footing is uncertain. Second, the gap between domains can be very large, demanding sufficient robustness and adaptability from the model, much as a person used to the plains needs time and effort to acclimatize to a high plateau.
Looking ahead, unsupervised domain-adaptive person re-ID will focus increasingly on robustness and adaptability. On one hand, more advanced deep learning algorithms and larger datasets can raise model performance; on the other, improved training strategies and optimized model structures can strengthen generalization. We also hope more researchers will turn their attention to this area and advance it together. In short, unsupervised domain-adaptive person re-ID for cross-region scenarios is a research topic full of challenges and opportunities. It demands deep expertise and technical skill, and, even more, keen insight and a persistent spirit of exploration.
Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images.
We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects).
Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum.
Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in Fig. 1, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.

Fig. 1. The electromagnetic spectrum arranged according to energy per photon.

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image.
However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Fig. 2. The fundamental steps and modules of digital image processing.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects.
In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors.
As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection

Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level. Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all.
Instead they are normally affected by one or several of the following effects:

1. focal blur caused by a finite depth-of-field and finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or interreflections in the vicinity of object edges.

A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.

To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels. If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.

There are many methods for edge detection, but most of them can be grouped into two categories, search-based and zero-crossing based.
The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).

The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.

Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges.

If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression however, the edge curves are thin by definition and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure.
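As a minimal sketch of the search-based pipeline just described (gradient estimates in x and y, gradient magnitude as edge strength, then a threshold): the Sobel kernels are standard, while the tiny synthetic image, the hand-rolled convolution, and the threshold value are illustrative choices.

```python
import numpy as np

def convolve2d_valid(img, kernel):
    # Tiny 'valid'-mode 2-D correlation so the sketch needs only NumPy.
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels for gradient estimates in the x- and y-directions.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = convolve2d_valid(img, sobel_x)
gy = convolve2d_valid(img, sobel_y)
magnitude = np.hypot(gx, gy)   # edge strength
edges = magnitude > 2.0        # global threshold (illustrative value)
print(np.flatnonzero(edges[0]))
```

On a noisy image, a higher threshold would suppress spurious responses at the cost of fragmenting weak edges, exactly the trade-off described above.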
On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.

A commonly used approach to choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient.
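The hysteresis procedure described above can be sketched as follows; the edge-strength map and the two thresholds are illustrative, and the tracing is done with a simple 8-connected flood fill rather than any particular library routine.

```python
import numpy as np
from collections import deque

def hysteresis(strength, low, high):
    # Seed edges where strength exceeds the high threshold, then grow
    # them through any neighbouring pixels above the low threshold.
    edges = np.zeros(strength.shape, dtype=bool)
    queue = deque(map(tuple, np.argwhere(strength >= high)))  # start points
    while queue:
        i, j = queue.popleft()
        if edges[i, j]:
            continue
        edges[i, j] = True
        # Trace into 8-connected neighbours above the low threshold.
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < strength.shape[0] and 0 <= nj < strength.shape[1]
                        and not edges[ni, nj] and strength[ni, nj] >= low):
                    queue.append((ni, nj))
    return edges

strength = np.array([[0.0, 0.0, 0.0, 0.0],
                     [0.9, 0.4, 0.4, 0.2],
                     [0.0, 0.0, 0.0, 0.5]])
edges = hysteresis(strength, low=0.3, high=0.8)
print(edges.astype(int))
```

Note how the faint pixels (0.4, 0.4, 0.5) are kept because they are connected to the strong seed at 0.9, while the 0.2 pixel breaks the chain, matching the behaviour described in the text.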
Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative; the definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient, and second-order derivatives are obtained using the Laplacian.
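This relationship between the gradient peak and the zero crossing of the second derivative is easy to check numerically on a one-dimensional smoothed step edge; the tanh signal below is a synthetic stand-in for an image intensity profile.

```python
import numpy as np

# Smooth step edge centred at x = 0 (synthetic 1-D intensity profile).
x = np.linspace(-3.0, 3.0, 61)
signal = np.tanh(2 * x)

d1 = np.gradient(signal, x)   # first derivative (gradient)
d2 = np.gradient(d1, x)       # second derivative (1-D Laplacian)

# Zero crossings of the second derivative: sign changes between samples.
crossings = np.flatnonzero(np.signbit(d2[:-1]) != np.signbit(d2[1:]))
print(x[crossings], x[np.argmax(d1)])
```

The single zero crossing sits where the first derivative peaks, at the centre of the edge, which is exactly the property zero-crossing detectors exploit.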
Python code for AI-based soil detection

Below is a simple Python example for AI-based soil detection:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Load the soil dataset
data = pd.read_csv("soil_dataset.csv")

# Split into features and labels (labels in the last column)
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Build and train an SVM model
svm = SVC()
svm.fit(X_train, y_train)

# Predict and evaluate the model
y_pred = svm.predict(X_test)
print(classification_report(y_test, y_pred))
```

Note that the code above is only an example; a real soil detection task will likely require more elaborate feature engineering, model selection, and hyper-parameter tuning. You will need to adapt and optimize it for your specific dataset and task, and to prepare a suitable soil dataset (soil_dataset.csv) for training and testing.
Integrated Gradients Feature Attribution: Overview and Explanation

1. Introduction

1.1 Overview

Integrated gradients is a technique for analyzing and explaining the predictions of machine learning models. As machine learning has developed rapidly and found wide application, the demand for model interpretability has grown steadily. Traditional machine learning models are often regarded as "black boxes" that cannot explain why they make a given prediction, which limits their use in critical application areas such as financial risk assessment, medical diagnosis, and autonomous driving. To address this problem, researchers have proposed a variety of explanation methods for machine learning models, among which integrated gradients is a widely studied and effective technique. It provides interpretable explanations for a model's predictions, revealing how much attention and influence the model assigns to different features. By analyzing the gradient with respect to each feature, one can determine the role that feature plays in a prediction and its degree of contribution, helping users understand the model's decision process. This is significant for model evaluation, optimization, and improvement.
Integrated gradients applies broadly: not only to traditional machine learning models such as decision trees, support vector machines, and logistic regression, but also to deep learning models such as neural networks and convolutional neural networks. It can provide useful information and explanations for features of all kinds, numerical and categorical alike. This article elaborates on the principle of integrated gradients, its advantages in application, and its future development, aiming to give readers a comprehensive understanding and a usage guide.
In the following chapters, we first introduce the basic principle and algorithm of integrated gradients, and then discuss the advantages of applying the method and its practical application scenarios. Finally, we summarize the importance of the method and look ahead to its future development.
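As a minimal sketch of the principle: integrated gradients attributes a prediction by averaging the model's gradient along a straight path from a baseline input to the actual input and scaling by the input difference. The toy quadratic "model" below, its analytic gradient, and the zero baseline are illustrative assumptions; in real use one would differentiate a trained network.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=64):
    # Average the gradient along the straight path baseline -> x
    # (midpoint Riemann sum), then scale by the input difference.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model F(x) = sum(x**2) with known gradient 2*x (illustrative).
F = lambda x: np.sum(x ** 2)
grad_F = lambda x: 2 * x

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline, grad_F)
print(attr, attr.sum(), F(x) - F(baseline))
```

A convenient sanity check is the completeness axiom: the attributions should sum to F(x) - F(baseline), which holds here.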
1.2 Structure of the Article

The structure section outlines the framework of the whole article so that readers can clearly see how it is organized and how its content is arranged. The first part is the introduction, which presents the background and significance of the article. Section 1.1 sketches the topic to be discussed, briefly introducing the basic concept and application areas of integrated gradients. Section 1.2 focuses on the structure of the article, listing the title and synopsis of each part so that readers can quickly grasp its overall content.
An Anomaly Detection Algorithm Based on Multi-Level Kernel Density Estimation

With the arrival of the big-data era, data have become ever more complex and voluminous, and they contain many outliers: records whose characteristics differ from those of normal data, caused by noise, fraud, errors, or other unknown factors. The presence of outliers can distort and mislead data analysis and mining, so detecting and handling them is an important task in data preprocessing. Current outlier detection algorithms fall mainly into model-based methods and non-parametric methods. Model-based methods include statistical models, machine learning models, and clustering models; they usually require assuming the probability distribution of the data in advance or fixing some preset model parameters, and they struggle to cope with complex or unknown data distributions. Non-parametric methods generally need no prior distributional assumptions or parameters, and are therefore more adaptable and robust.
This article introduces an anomaly detection algorithm based on multi-level kernel density estimation. The algorithm cascades several kernel density estimates and detects outliers by computing the magnitude and direction of the density change between levels. It requires no assumptions or parameter settings, applies to datasets with different distribution shapes, and achieves high accuracy and robustness.
1. Basic Principle of Multi-Level Kernel Density Estimation

Kernel density estimation is a non-parametric method commonly used to describe the probability density function of data. On a one-dimensional dataset, the kernel density estimate can be written as

$$\hat{f}_{h}(x)=\frac{1}{nh}\sum_{i=1}^{n}K\left(\frac{x-x_{i}}{h}\right)$$

where $K$ is the kernel function and $h$ is the bandwidth parameter. The kernel is usually a symmetric, non-negative function that integrates to $1$ over its domain, such as the Gaussian or Epanechnikov kernel. The bandwidth controls the "width" of the kernel at each data point, and thereby the smoothness and accuracy of the density estimate.
To extend kernel density estimation from one-dimensional to multi-dimensional data, a multivariate kernel density estimate can be used. On a $d$-dimensional dataset it can be written as

$$\hat{f}_{h}(\mathbf{x})=\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{\mathbf{x}-\mathbf{x}_{i}}{h}\right)$$

where $\mathbf{x}$ is a $d$-dimensional vector, $d$ is the dimension of the data, and $h$ is the bandwidth (in general a vector of $d$ per-dimension bandwidths).
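A minimal one-dimensional sketch of the estimator above with a Gaussian kernel, used the way a density-based outlier detector would use it: points in low-density regions are flagged. The sample data, bandwidth, and query points are illustrative.

```python
import numpy as np

def kde(points, samples, h):
    # \hat f_h(x) = 1/(n h) * sum_i K((x - x_i)/h), Gaussian kernel K.
    u = (points[None, :] - samples[:, None]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=0) / (len(samples) * h)

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=500)   # "normal" data

# The estimated density is high near the bulk of the data and vanishing
# at a far-away point, which is what a density-based detector thresholds.
dens = kde(np.array([0.0, 8.0]), samples, h=0.4)
print(dens)
```

A multi-level variant, as in the algorithm described here, would repeat this with several bandwidths and compare how the estimated density changes across levels.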
Journal of Computer and Communications, 2020, 8, 23-30
ISSN Online: 2327-5227, ISSN Print: 2327-5219

Automated Landform Classification of China Based on Hammond's Method

Baoying Ye
School of Land Science and Technology, China University of Geosciences, Beijing, China

Abstract

The automatic classification of macro landforms was carried out with a program implementing Hammond's manual procedure, which is based on properties of slope, local relief, and profile type and defines 5 landform types, 24 landform classes, and 96 landform subclasses. The program identifies landform types by moving a square window of 9.8 km × 9.8 km. The data include 816 sheets of topographic map at a scale of 1:250,000. A DEM with a cell size of 200 m was built from the contours and spot heights in these data and merged into one sheet. The automated classification was run on this DEM with an AML program in ArcGIS 10.x Workstation. The result indicates that the classification bears a good resemblance to the landforms of China. Maps were produced with 5 types, 16 classes, and 90 subclasses respectively. The 5 landform types were Plains (PLA), 20.25% of the whole area; Tablelands (TAB), 3.56%; Plains with Hills or Mountains (PHM), 32.84%; Open Hills and Mountains (OHM), 18.72%; and Hills and Mountains (HM), 24.63%. Among the 24 landform classes, some classes did not occur, such as irregular plains with low relief; open very low hills and open low hills; and very low hills, low hills, and moderate hills. The result for the 96 landform subclasses is similar to that for the 24 classes.

Keywords

Landform Classification, Hammond, DEM

1. Introduction

To some degree, landforms influence the distribution and evolution of ecology and other environmental factors, which is the core and basic content of geography [1]. Landform morphological classification is the basic unit of landform study, and also the first step in solving geomorphic problems.
Large-scale landform classification in China started in 1950. In 1956, the 1:4,000,000 Landform Classifications and Region Planning Map of China was compiled according to altitude and degree of surface cutting (Table 1). In 1979, the mapping standard for 1:1,000,000 landform classification in China was completed; it classified landform types by altitude, relative altitude, and degree of surface cutting, following the classification scheme of З.A. Cварицевская (1975). By 1989 only 15 sheets of landform maps (1:1,000,000 scale) had been completed, and the mission was suspended for a long time. Not until 2009 were the 1:1,000,000-scale landform atlases of the whole of China accomplished [2]. Both of the classification schemes above are based on manual processing.

How to cite this paper: Ye, B.Y. (2020) Automated Landform Classification of China Based on Hammond's Method. Journal of Computer and Communications, 8, 23-30. Received: June 1, 2020; Accepted: June 26, 2020; Published: June 29, 2020.

The 1:4,000,000 scheme is based on forms and exogenic forces, and many of its parameters are not quantitative. Many quantitative factors were introduced into the 1:1,000,000 scheme, such as altitude, local relief, and slope. Local relief is classified into 4 classes: less than 500 m is low-relief hills; 500 - 1000 m is moderate-relief hills; 1000 - 2500 m is high-relief mountains; and more than 2500 m is very-high-relief mountains [3]. Some other papers have also adopted local relief, but with different classes, in whole-China landform schemes. Cai Zongxin (1986) divided it into 5 classes: less than 20 m is plains; 20 - 200 m is hills; 200 - 500 m is low mountains; 500 - 1500 m is middle mountains; and more than 1500 m is high mountains (Table 2) [3]. Tu Hanming et al. [4] classified the local relief of China into 7 classes based on statistics of samples from DEMs of the whole of China.
In 2009, Zhou Chenghu et al. classified the landforms of China into 7 types and 25 classes according to slope, relief, and altitude (Table 3).

In the 1990s, some scholars contributed to extracting single landform parameters in China, such as ridge and valley lines [5] [6] [7], summits [8], shoulder lines of valleys [9] [10], and micro-topography [11]. All of these works address regions with simple landform evolution, and there are many obstacles to automatically classifying the landforms of the whole of China. Liu Aili et al. (2006) [12] attempted to classify the landforms of the whole of China automatically with image-classification methods, but the sampling cell was 1000 m × 1000 m, coarse enough to omit many small landform units.

Table 1. Mountain and hills classification of China.

Class                    Subclass               Altitude (m)   Surface cutting degree (m)
Extremely high mountain  -                      >5000          >1000
High mountain            High mountain          3500 - 5000    >1000
                         Mid-high mountain                     500 - 1000
                         Low-high mountain                     <500
Middle mountain          High-middle mountain   1000 - 3500    >1000
                         Middle mountain                       500 - 1000
                         Low-middle mountain                   <500
Low mountain             Mid-low mountain       500 - 1000     500 - 1000
                         Low mountain                          100 - 500
Hills                    -                      -              <500

Table 2. The basic geomorphologic index of China.

Types            Relative altitude (m)
Plain            <20
Hills            20 - 200
Low mountain     200 - 500
Middle mountain  500 - 1500
High mountain    >1500

Table 3. Basic morphological types of land geomorphology in China.

Relief (m) \ Altitude (m)              Low altitude (<1000)       Mid-altitude (1000 - 3500)  High altitude (3500 - 5000)         Extremely high altitude (>5000)
Plain (<30)                            Low altitude plain         Mid-altitude plain          High altitude plain                 Extremely high altitude plain
Platform (>30)                         Low altitude platform      Mid-altitude platform       High altitude platform              Extremely high altitude platform
Hills (<200)                           Low altitude hills         Mid-altitude hills          High altitude hills                 Extremely high altitude hills
Small-relief mountain (200 - 500)      Small-relief low mountain  Small-relief mid-mountain   Small-relief high mountain          Small-relief extremely high mountain
Mid-relief mountain (500 - 1000)       Mid-relief low mountain    Mid-relief mid-mountain     Mid-relief high mountain            Mid-relief extremely high mountain
Big-relief mountain (1000 - 2500)      -                          Big-relief mid-mountain     Big-relief high mountain            Big-relief extremely high mountain
Extremely big-relief mountain (>2500)  -                          -                           Extremely big-relief high mountain  Extremely big-relief extremely high mountain

In this paper, we classify the landforms of the whole of China with Hammond's scheme, according to slope, local relief, and profile type [13] [14], and compare the result with the scheme of Zhou Chenghu et al. (2009) [2]. The computer program is based on the approach developed by Dikau et al. [15]. In order to allow comparison with international landform maps, the parameters of Hammond's scheme are kept unchanged.

2. Hammond Landform Classification

2.1. Concept

Hammond's hierarchic landform classification is based on properties of slope, local relief, and profile type.

1) Slope is divided into 4 levels based on the percent of the area that is gently sloping; an inclination below 8% is called a gentle slope (Figure 1). The percent area is calculated in a moving window (9.8 km × 9.8 km).

Figure 1. Percent of area locally gently sloping (4 × 4 example). A: 31.25%, B: 18.75%, C: 37.5%, D: 12.5%.

2) Local relief is the difference between the maximum and minimum elevation in the moving window.
Local relief has a non-linear relationship with horizontal length, as shown by examining a variety of mountain belts [16]. Tu Hanming et al. [4] [17] calculated the length scale with sampling data from the whole land area of China; 5 optimum statistical lengths were calculated corresponding to different map scales: 2, 6, 16, 20, and 22 (km2). In this paper we choose the 9.8 km × 9.8 km window in order to compare with Hammond's classification.

3) Profile type subdivides tablelands as upland units and plains with hills or mountains as lowland units [15].

With these three parameters, Hammond classified 96 landform subclasses theoretically (Table 4, Table 5). Hammond used only the 45 subclasses that were common in the U.S. [18]. He generalized his results by merging areas smaller than 2072 km2 into adjacent units to avoid cluttering on a 1:5,000,000 map. Dikau et al. [15] developed an automated approach that identifies all 96 landform units without generalization.

2.2. Method

The data were processed in ArcGIS 10.x Workstation under 64-bit Windows on an HP xw8400. Python and ARC/INFO AML were the scripting languages for batching the data. The procedure consists of two main steps, DEM build-up and automated classification.

DEM build-up: The contour and spot-height features were extracted from the terrain layer. To eliminate boundary effects, 16 sheets were merged into one map before generation of the DEM, and the DEM was then clipped with the boundary of one sheet. The whole of China consists of 61 maps at a scale of 1:1,000,000. The DEM was built from the contours and spot heights with the ARC/INFO command "generate", and merged into one sheet at 100 m.

Automated classification: The DEM was resampled to 200 m. The moving window is 49 × 49 cells (9.8 km × 9.8 km). The three parameter layers were first derived from the DEM and then overlaid to generate one 96-subclass landform map. An AML was developed according to Dikau's approach, and the three parameter layers were merged to yield a landform map.
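A hedged sketch of the two windowed parameters described above (percent of area gently sloping and local relief): the toy DEM, the 3 × 3 window standing in for the paper's 49 × 49 one, and the naive loops are illustrative assumptions; only the 8% slope cutoff and the max-minus-min relief definition follow the text, and this is not the paper's AML program.

```python
import numpy as np

def hammond_params(dem, cell_size, window=3):
    # Slope as rise/run from central differences, expressed in percent.
    gy, gx = np.gradient(dem, cell_size)
    slope_pct = 100 * np.hypot(gx, gy)
    gently = slope_pct < 8           # "gently sloping" cutoff from the text

    h = window // 2
    n = window * window
    rows, cols = dem.shape
    pct_gentle = np.full(dem.shape, np.nan)
    relief = np.full(dem.shape, np.nan)
    for i in range(h, rows - h):
        for j in range(h, cols - h):
            win = dem[i - h:i + h + 1, j - h:j + h + 1]
            relief[i, j] = win.max() - win.min()   # max - min elevation
            pct_gentle[i, j] = 100 * gently[i - h:i + h + 1,
                                            j - h:j + h + 1].sum() / n
    return pct_gentle, relief

# Toy DEM: a flat plain with a 50 m plateau in one corner, 200 m cells.
dem = np.array([[0., 0., 0., 0.],
                [0., 0., 0., 0.],
                [0., 0., 50., 50.],
                [0., 0., 50., 50.]])
pct, rel = hammond_params(dem, cell_size=200.0)
print(pct[1, 1], rel[2, 2])
```

Hammond's scheme would then bin these two layers (plus profile type) into the class codes of Table 4 and overlay them to yield the subclass map.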
Table 4. Hammond's landform classification.
Percent of area gently sloping: 1) more than 80; 2) 50-80; 3) 20-50; 4) less than 20.
Local relief (m): 1) 0-30; 2) 30-91; 3) 91-152; 4) 152-305; 5) 305-914; 6) more than 914.
Profile type: 1) >75% in lowland; 2) 50%-75% in lowland; 3) 25%-50% in lowland; 4) <25% in lowland.

Table 5. The landform classifications of China (class, area %; 96-subclass codes with area %).
Plains (PLA), 20.25:
- flat or nearly flat plains, 10.86: 111, 112, 113, 114 (3.41, 3.15, 2.64, 1.67)
- smooth plains with some local relief, 9.37: 121, 122, 123, 124 (4.78, 2.51, 1.52, 0.56)
- irregular plains with moderate relief, 0.02: 221, 222, 223, 224 (0.02, 0.01)
Tablelands (TAB), 3.56:
- tablelands with moderate relief, 1.34: 133, 134, 233, 234 (1.04, 0.27, 0.02)
- tablelands with considerable relief, 1.50: 143, 144, 243, 244 (0.77, 0.22, 0.38, 0.13)
- tablelands with high relief, 0.70: 153, 154, 253, 254 (0.10, 0.05, 0.37, 0.19)
- tablelands with very high relief, 0.03: 163, 164, 263, 264 (0.01, 0.01, 0.01)
Plains with hills or mountains (PHM), 32.84:
- plains with hills, 7.25: 131, 132, 231, 232 (4.73, 2.17, 0.20, 0.15)
- plains with high hills, 12.64: 141, 142, 241, 242 (7.10, 1.89, 2.84, 0.80)
- plains with low mountains, 12.45: 151, 152, 251, 252 (3.19, 0.29, 8.04, 0.93)
- plains with high mountains, 0.50: 161, 162, 261, 262 (0.04, 0.00, 0.46, 0.01)
Open hills and mountains (OHM), 18.72:
- open high hills, 1.14: 341, 342, 343, 344 (0.44, 0.41, 0.24, 0.05)
- open low mountains, 14.85: 351, 352, 353, 354 (10.37, 2.53, 1.34, 0.61)
- open high mountains, 2.73: 361, 362, 363, 364 (2.25, 0.19, 0.12, 0.16)
Hills and mountains (HMO), 24.63:
- low mountains, 7.10: 451, 452, 453, 454 (3.73, 2.08, 0.99, 0.30)
- high mountains, 17.52: 461, 462, 463, 464 (7.29, 5.19, 3.27, 1.78)

3. Study Area and Data
This automated process was tested on almost the whole of China, consisting of the mainland and the Hainan and Taiwan islands. The data include 816 sheets of topographic maps at a scale of 1:250,000, digitized by the National Geomatics Center of China in 1998.
The content consists of 14 layers: hydrological system, residential, railway, road, boundary, terrain, and some auxiliary ones. The terrain data include contours and mark points, and the contour interval is 50 or 100 m.

4. Result and Analysis
The maps were constructed with 5 types, 24 classes, and 90 subclasses respectively (Table 2, Figure 2, Figure 3). The whole area of China is 9,482,552.72 km2, excluding some small islands that were not calculated. The 5 landform types were Plains (PLA), 20.25% of the whole area; Tablelands (TAB), 3.56%; Plains with Hills or Mountains (PHM), 32.84%; Open Hills and Mountains (OHM), 18.72%; and Hills and Mountains (HMO), 24.63%. The PLA were located in the Songnen Plain, Sanjiang Plain, Huabei Plain, Huaihai Plain, Jianghai Plain, Aletai Basin, Talimu Basin, Loess Plateau, etc. The TAB were scattered over the whole of China, each patch being small. The PHM were located in the Xiao-Xing'anling Mountains, the Shandong Peninsula, Inner Mongolia, the Qinghai-Tibet Plateau, the Sichuan Basin, and Guangxi and Hunan provinces. The OHM were located in the Da-Xing'anling Mountains, Shaanxi Province, and Guizhou Province, and scattered over the north of the Tibet Plateau. The HMO are located in the east of the Tibet Plateau, around the Sichuan Basin, and in Yunnan, Fujian, and Taiwan provinces. The result indicates that the procedure produced a classification that closely resembles the landforms of China.
Some classes were not generated, such as irregular plains and low hills. The PLA are primarily flat or smooth with little relief, and the altitude in hill or mountain regions is high, so there are almost no low hills.
According to Hammond's scheme, the area of TAB is only 3.56%; the area of tableland in some manual schemes is much larger [19]. There are several large tablelands, such as the Qinghai-Tibet Plateau, Mongolia Plateau, Loess Plateau,
(Figure 2: 5-type landforms map of China land. Figure 3: 24-classes landforms map of China land.)
and Yun-gui Plateau.
In Figure 2, the Qinghai-Tibet Plateau is mainly classified into PHM; the Mongolia Tableland and Loess Tableland are classified into PLA or PHM; and the Yun-gui Tableland is classified into HMO. There are many hills and mountains in the tablelands of China. Basins are basically classified into PLA, but the Sichuan Basin is mainly classified into PHM or PLA.

5. Conclusion
Automated landform classification produced a classification that closely resembles that of the manual approach. However, some classes differ from the manual method. The landforms of China are much more complex and their geomorphologic evolution much more varied, so the method needs to be improved to classify them more reasonably. Furthermore, the effects of scale and generalization should also receive special attention.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.

References
[1] Yan, S.X. (1985) Geomorphology. Shanghai High Education Press.
[2] State Key Laboratory of Resources and Environmental Information System (2009).
[3] Su, S.Y. and Li, J.Z. (1998) Geomorphology Mapping.
[4] Tu, H.M. and Liu, Z.D. (1991) Study on Amplitude in China. Acta Geodaetica et Cartographica Sinica, 20, 311-319.
[5] Liu, Z.H. and Huang, P.Z. (2003) Derivation of Skeleton Line from Topographic Map with DEM Data. Science of Surveying and Mapping, 28, 33-38.
[6] Jin, H.L., Gao, J.X. and Kang, J.R. (2005) A Study of Extracting Terrain Feature Lines Based on Vector Contour Data. Bulletin of Surveying and Mapping, 67, 54-55.
[7] Qu, J.H., Cheng, J.L. and Cui, X.G. (2007) Automatic Extraction for Ridge and Valley by Vertical Sectional Method. Science of Surveying and Mapping, 32, 33-34.
[8] Chen, P.P., Zhang, Y.S., Wang, C., et al. (2006) Method of Extracting Surface Peaks Based on DEM. Modern Surveying and Mapping, 29, 11-13.
[9] Lu, G.N., Qian, Y.D. and Chen, Z.M. (1998) Study of Automated Extraction of Shoulder Line of Valley from Grid Digital Elevation Data.
Scientia Geographica Sinica, 18, 567-573.
[10] Liu, P.J., Zhu, Q.K., Wu, D.L., et al. (2006) Automated Extraction of Shoulder Line of Valleys Based on Flow Paths from Grid Digital Elevation Model (DEM) Data. Journal of Beijing Forestry University, 28, 72-75.
[11] Zhou, F.B. and Liu, X.J. (2008) Research on the Automated Classification of Micro Landform Based on Grid DEM. Journal of Wuhan University of Technology (Information & Management Engineering), 30, 172-175.
[12] Liu, A.L. and Tang, G.A. (2006) DEM Based Auto-Classification of Chinese Landform. Geo-Information Science, 8, 8-14.
[13] Hammond, E.H. (1954) Small-Scale Continental Landform Maps. Annals of the Association of American Geographers, 44, 33-42. https://doi.org/10.1080/00045605409352120
[14] Hammond, E.H. (1964) Analysis of Properties in Land Form Geography: An Application to Broad-Scale Land Form Mapping. Annals of the Association of American Geographers, 54, 11-19. https://doi.org/10.1111/j.1467-8306.1964.tb00470.x
[15] Dikau, R., Brabb, E.E. and Mark, R.M. (1991) Landform Classification of New Mexico by Computer. U.S. Geological Survey, Menlo Park, CA, Open-File Report 91-634. https://doi.org/10.3133/ofr91634
[16] Ahnert, F. (1984) Local Relief and the Height Limits of Mountain Ranges. American Journal of Science, 284, 1035-1055. https://doi.org/10.2475/ajs.284.9.1035
[17] Tu, H.M. and Liu, Z.D. (1990) Demonstrating on Optimum Statistics Unit of Relief Amplitude in China. Journal of Hubei University (Natural Science), 20, 311-319.
[18] Brabyn, L. (1998) GIS Analysis of Macro Landform. Presented at the 10th Annual Colloquium of the Spatial Information Research Centre, University of Otago.
[19] Chen, Z.M. (1993) 1:4,000,000 Geomorphologic Map of China and Its Adjacent Area. China Map Press.
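The two moving-window parameters of Section 2 (percent of area gently sloping, and local relief as the max-minus-min elevation in the window) can be sketched on a toy DEM. This is our own minimal sketch, not the paper's AML program; the function name, synthetic DEM, and the tiny 3-cell window are all ours, and a real run would use the paper's 49 x 49 cells (9.8 km x 9.8 km) on a 200 m grid.

```python
import numpy as np

def hammond_parameters(dem, cell_size=200.0, window=49, gentle_pct=8.0):
    """Percent-gently-sloping and local relief in a square moving window.

    dem        : 2-D elevation array (m)
    cell_size  : grid spacing (m)
    window     : odd window width in cells (the paper uses 49 = 9.8 km)
    gentle_pct : slope threshold in percent (Hammond uses 8%)
    """
    # Slope in percent from central differences of the DEM.
    dzdy, dzdx = np.gradient(dem, cell_size)
    slope_pct = 100.0 * np.hypot(dzdx, dzdy)
    gentle = slope_pct < gentle_pct

    r = window // 2
    pct_gentle = np.zeros_like(dem, dtype=float)
    relief = np.zeros_like(dem, dtype=float)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            win_dem = dem[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            win_gentle = gentle[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            pct_gentle[i, j] = 100.0 * win_gentle.mean()
            # Local relief: max minus min elevation in the window.
            relief[i, j] = win_dem.max() - win_dem.min()
    return pct_gentle, relief

# Tiny synthetic DEM: a flat plain next to a linear ramp.
dem = np.hstack([np.zeros((7, 4)), np.tile(np.arange(4) * 100.0, (7, 1))])
pct, rel = hammond_parameters(dem, cell_size=200.0, window=3)
```

Thresholding `pct_gentle` into Hammond's four slope levels and `relief` into the six relief levels, then overlaying the layers, yields the subclass codes of Table 4.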
Vehicle Trajectory Prediction in Expressway Weaving Areas Based on Transfer Learning
殷子健; 徐良杰; 刘伟; 马宇康; 林海
[Journal] Journal of Shenzhen University (Science and Engineering)
[Year (Volume), Issue] 2024, 41(1)
[Abstract] Vehicle trajectory prediction in the complex scenario of expressway weaving areas is important for the decision-making and control of intelligent vehicles. To address the real-time and accuracy challenges that the complex traffic flow of weaving areas poses to trajectory prediction, a transfer-learning-based trajectory prediction method is proposed: an existing prediction model trained on straight expressway segments is transferred to the weaving-area scenario, enabling faster and more accurate prediction there. Using weaving-area trajectory data from the NGSIM (Next Generation Simulation) dataset and a long short-term memory (LSTM) neural network, the fully trained straight-segment model is fine-tuned on the weaving area, and trajectories are predicted frame by frame with a rolling time-series forecasting scheme. Experimental results show that the lateral and longitudinal behavior prediction accuracies reach 98.35% and 93.01%, and the root-mean-square error of the predicted trajectories is 2.04 cm. Transfer learning in the weaving area shortens model training time by 61.1% while improving prediction accuracy and model generalization.
[Pages] 9 (P92-100)
[Authors] 殷子健; 徐良杰; 刘伟; 马宇康; 林海
[Affiliations] School of Transportation and Logistics Engineering, Wuhan University of Technology; School of Cyber Science and Engineering, Wuhan University
[Language] Chinese
[CLC number] U491.2; TP242.6
[Related literature]
1. Traffic safety evaluation of expressway weaving areas based on vehicle clearance limits
2. Prediction of lane-change duration distance for vehicles in tunnel weaving areas based on WNN
3. An improved convolutional-neural-network speed prediction model for expressway weaving areas
4. An LSTM-based lane-change trajectory prediction model for expressway vehicles
5. Lane-change trajectory prediction in expressway weaving areas based on an attention Seq2Seq network
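The two ideas in the abstract — warm-starting a target-domain model from a source-domain model, and rolling (frame-by-frame) forecasting that feeds predictions back in — can be illustrated without the paper's LSTM. The sketch below is ours and uses a tiny linear autoregressive model on synthetic lateral-position series standing in for NGSIM trajectories; all names and data are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_track(curvature, n=200):
    """Toy lateral-position series: curvature=0 mimics a straight
    segment, non-zero curvature mimics weaving-area lane drift."""
    t = np.arange(n)
    return 0.05 * t + curvature * np.sin(0.1 * t) + 0.01 * rng.standard_normal(n)

def fit_ar(series, order=4, w=None, epochs=500, lr=1e-3):
    """One-step AR(order) predictor trained by gradient descent on the
    squared error; passing w warm-starts from a pretrained model,
    which is the transfer-learning step."""
    X = np.stack([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    if w is None:
        w = np.zeros(order)
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def rolling_forecast(w, history, steps):
    """Rolling prediction: each new prediction becomes the latest frame."""
    buf = list(history[-len(w):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(w, buf))
        out.append(nxt)
        buf = buf[1:] + [nxt]
    return np.array(out)

source = make_track(curvature=0.0)          # straight expressway segment
target = make_track(curvature=0.5, n=60)    # weaving area, few samples

w_src = fit_ar(source)                      # pretrain on the source domain
w_tl = fit_ar(target, w=w_src.copy())       # fine-tune on the target domain
pred = rolling_forecast(w_tl, target, steps=5)
```

Because fine-tuning starts from the source weights instead of zeros, fewer target-domain samples and iterations are needed, which mirrors the 61.1% training-time saving reported above.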
Research on Target Recognition in Hyperspectral Remote Sensing Images Based on Machine Learning

Hyperspectral remote sensing images are a type of remote sensing data obtained by measuring the reflectance of target objects in many different spectral bands. They carry rich spectral information and provide many more target features, so they are widely used for target recognition and classification. With the rapid development of machine learning and improvements in hyperspectral technology, target recognition in hyperspectral imagery using machine learning has become a popular research area.
Machine learning methods are widely applied in this research. Machine learning aims to learn patterns and regularities from known data through training algorithms and to apply them to the prediction and classification of unknown data. Research on machine-learning-based hyperspectral target recognition mainly covers the following aspects.
First, data processing is an important step. Hyperspectral image data contain a large number of spectral bands, each carrying rich spectral information. To recognize targets well, the data must be preprocessed and reduced in dimensionality. Preprocessing includes denoising, correction, and equalization to improve image quality. Dimensionality reduction can be realized with methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), converting the high-dimensional spectral data into low-dimensional feature vectors for subsequent classification.
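The PCA step just described can be sketched in a few lines: project mean-centered spectra onto the leading eigenvectors of the band covariance matrix. The synthetic "cube" below is an assumption for illustration; real hyperspectral data would replace it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hyperspectral data: 100 pixels x 50 bands, where the true signal
# lives in a 3-dimensional spectral subspace plus a little noise.
n_pix, n_bands, k = 100, 50, 3
basis = rng.standard_normal((k, n_bands))
abundances = rng.random((n_pix, k))
cube = abundances @ basis + 0.01 * rng.standard_normal((n_pix, n_bands))

def pca_reduce(X, n_components):
    """PCA via eigendecomposition of the band covariance matrix:
    returns the low-dimensional features and the explained-variance
    ratio of the kept components."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :n_components]     # leading eigenvectors
    explained = vals[::-1][:n_components].sum() / vals.sum()
    return Xc @ top, explained

features, explained = pca_reduce(cube, n_components=3)
```

Because the signal here is genuinely 3-dimensional, three components capture almost all the variance; on real scenes one keeps enough components to reach a chosen explained-variance target.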
Second, feature extraction is the key step. It aims to extract, from the raw image data, information that effectively reflects the characteristics of the target. Traditional feature extraction methods include pixel-level features, statistical features, and frequency-domain features: pixel-level features describe the target by pixel gray or color values; statistical features describe it by statistics of texture, shape, and gray-level distribution; and frequency-domain features describe it by the frequency content of the image. In recent years, with the rise of deep learning, convolutional neural networks (CNNs) have been widely applied to hyperspectral target recognition and can extract richer, more expressive features.
Third, classification is mainly performed with supervised or unsupervised learning. Supervised methods learn from labeled training samples and then classify unknown data with the learned model; commonly used supervised algorithms include support vector machines (SVM), decision trees, random forests, and deep learning. Unsupervised methods learn from unlabeled data, modeling the distribution of the data to achieve target classification.
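The supervised route above (learn from labeled spectra, then classify unknown pixels) can be shown with the simplest possible classifier. This nearest-centroid sketch is a deliberately minimal stand-in for the SVM/random-forest classifiers mentioned in the text; the two synthetic "ground-cover" classes are our assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical ground-cover classes with distinct mean spectra.
n_bands = 30
mean_a, mean_b = np.zeros(n_bands), np.linspace(0, 2, n_bands)
train = np.vstack([mean_a + 0.1 * rng.standard_normal((40, n_bands)),
                   mean_b + 0.1 * rng.standard_normal((40, n_bands))])
labels = np.array([0] * 40 + [1] * 40)

def fit_centroids(X, y):
    """Store one mean spectrum per labeled class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each pixel to the class with the nearest mean spectrum."""
    cls = np.array(sorted(centroids))
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in cls])
    return cls[d.argmin(axis=0)]

model = fit_centroids(train, labels)
test_pix = np.vstack([mean_a + 0.1 * rng.standard_normal((10, n_bands)),
                      mean_b + 0.1 * rng.standard_normal((10, n_bands))])
pred = predict(model, test_pix)
accuracy = (pred == np.array([0] * 10 + [1] * 10)).mean()
```

An SVM replaces the centroid distance with a maximum-margin decision boundary, but the train-on-labeled / predict-on-unknown workflow is exactly the same.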
Bone X-ray Image Anomaly Detection Based on Self-Supervised Feature Extraction
张雨宁; 阿布都克力木·阿布力孜; 梅悌胜; 徐春; 麦尔达娜·买买提热依木; 哈里旦木·阿布都克里木; 侯钰涛
[Journal] Journal of Computer Applications
[Year (Volume), Issue] 2024, 44(1)
[Abstract] To explore the feasibility of self-supervised feature extraction for anomaly detection in bone X-ray images, an anomaly detection method based on self-supervised feature extraction is proposed. A self-supervised learning framework is combined with a ViT (Vision Transformer) model for feature extraction in bone anomaly detection, and anomaly classification is performed with a linear classifier; in the feature extraction stage this effectively avoids the dependence of supervised models on large-scale labeled data. Experiments on a public bone X-ray dataset evaluate, by accuracy, both pretrained convolutional neural network (CNN) models and the self-supervised bone anomaly detection model. The results show that the self-supervised feature extraction model outperforms ordinary CNN models: its classification results on seven body parts are close to those of the supervised CNN model ResNet50, while it achieves the best accuracy on elbow, finger, and humerus anomaly detection, with average accuracy improved by 5.37 percentage points. The proposed method is easy to implement and can serve as a visual aid for radiologists' preliminary diagnosis.
[Pages] 7 (P175-181)
[Authors] 张雨宁; 阿布都克力木·阿布力孜; 梅悌胜; 徐春; 麦尔达娜·买买提热依木; 哈里旦木·阿布都克里木; 侯钰涛
[Affiliations] School of Information Management, Xinjiang University of Finance and Economics; School of Statistics and Data Science, Xinjiang University of Finance and Economics; First Clinical Medical College, Shaanxi University of Chinese Medicine
[Language] Chinese
[CLC number] TP391.1
[Related literature]
1. Hyperspectral image feature extraction based on semi-supervised sparse manifold embedding
2. Hyperspectral image feature extraction based on kernel semi-supervised discriminant analysis
3. A comparative study of point feature extraction and image matching methods based on epipolar images
4. Anomaly detection for power data based on multi-domain feature extraction
5. Network traffic anomaly detection based on secondary feature extraction and BiLSTM-Attention
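The "frozen encoder + linear classifier" pattern in the abstract (a linear probe) can be sketched without a real ViT: below, a fixed random projection stands in for the frozen self-supervised encoder, and only a logistic-regression head is trained. Everything here — the encoder, the data, the names — is an assumption for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

def frozen_encoder(x, W):
    """Hypothetical stand-in for frozen self-supervised features:
    a fixed, untrained nonlinear projection."""
    return np.tanh(x @ W)

d_raw, d_feat, n = 64, 16, 200
W_frozen = rng.standard_normal((d_raw, d_feat)) / np.sqrt(d_raw)
# Normal vs. abnormal samples differ by a shift along a fixed direction.
shift = 2.0 * rng.standard_normal(d_raw)
x = rng.standard_normal((n, d_raw)) + np.outer(np.repeat([0, 1], n // 2), shift)
y = np.repeat([0, 1], n // 2)
feats = frozen_encoder(x, W_frozen)

# Linear probe: only this logistic-regression head is trained;
# the encoder weights stay frozen throughout.
w, b = np.zeros(d_feat), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    g = p - y
    w -= 0.1 * feats.T @ g / n
    b -= 0.1 * g.mean()
train_acc = (((feats @ w + b) > 0) == (y == 1)).mean()
```

The point of the design is that the expensive representation is learned once without labels; the labeled data only have to support a tiny linear head.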
Isolation Forest Anomaly Detection in MATLAB

In MATLAB, an isolation forest can be used for anomaly detection. One possible workflow:

1. Prepare the data: assemble the dataset to be checked, e.g. a matrix or table with several feature columns, containing both normal and anomalous records.
2. Split training and test sets: use MATLAB's cvpartition function, for example to divide the data into an 80% training set and a 20% test set.
3. Train the isolation forest model on the training set. Assuming the Statistics and Machine Learning Toolbox (R2021b or later), this is done with the iforest function:

    % Grow an isolation forest with 100 isolation trees.
    [mdl, tfTrain, scoreTrain] = iforest(trainData, 'NumLearners', 100);

4. Detect anomalies: score new data with the trained model using the isanomaly function:

    % tf is a logical vector flagging anomalies; scores close to 1
    % indicate likely anomalies. The cutoff learned during training
    % is stored in mdl.ScoreThreshold.
    [tf, anomalyScore] = isanomaly(mdl, testData);

   A test point is flagged as anomalous when its score exceeds mdl.ScoreThreshold (scores lie in [0, 1], so "greater than 0" is not a usable criterion).
5. Visualize the results: for example, plot the test data in a scatter plot colored by tf to highlight the detected anomalies.
Value Engineering

0 Introduction
In recent years, curved screens have been used more and more in automobile manufacturing. Their process flow is divided into the array (backplane) stage, the cell (frontplane) stage, and the module stage. The array stage etches low-temperature polysilicon thin-film transistors (LTPS) onto the glass substrate for pixel control. The cell stage completes liquid crystal filling and color filter lamination: the LTPS-TFT substrate is first cleaned, dried, and cooled, then placed in a vacuum chamber where the light-emitting and functional layers are deposited, after which a polarizer is attached to the panel. The module stage assembles the circuitry and the peripheral backlight components. Each stage can introduce different Mura defects.
Mura defects usually appear as blotchy regions of non-uniform brightness with irregular shapes and low contrast, and most follow no regular pattern. Common Mura defects fall into three types: point defects, line defects, and region defects [1]. Schematic examples of common Mura defects are shown in Figure 1.

1 Common Mura Defect Detection Methods
At present, three detection methods are in common use: manual inspection, electrical measurement, and machine-vision-based optical inspection. In manual inspection, experienced defect inspection engineers judge the defect category by comparison against a defect sample library [2]. Electrical measurement is typically used to detect point and line defects caused by electrical faults such as short circuits, open circuits, poor contacts, and open grid lines on the panel. Common electrical methods include admittance-circuit testing, full-screen illumination, probe scanning, charge readout, voltage imaging, and electron-beam scanning of the pixel electrodes [3]. Electrical measurement cannot detect Mura defects caused by non-electrical factors such as chemical contamination; these require further methods such as machine-vision-based optical measurement. This is a non-contact technique that captures the information displayed on the screen with an image acquisition device and analyzes it quantitatively to determine the position and type of the defect.

2 Image Processing for Mura Defects
Image acquisition is affected by many factors, for example variations in the light intensity of the illumination equipment, the performance of the acquisition device itself, and the operator's skill in capturing images. The quality of the raw images initially obtained may therefore be less than ideal. So as not to degrade subsequent image analysis and interpretation, the captured images must be preprocessed. For Mura defects, the preprocessing mainly uses image filtering and image correction.
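A common form of the image correction mentioned above is shading correction: fit a smooth low-order polynomial background to the panel image and subtract it, so that a low-contrast Mura region stands out in the residual. The sketch below is a minimal illustration on a synthetic panel, not a production pipeline; thresholds and the toy defect are assumptions.

```python
import numpy as np

def fit_background(img, order=2):
    """Least-squares fit of a low-order 2-D polynomial background,
    a simple shading-correction step before mura segmentation."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y = x.ravel() / w, y.ravel() / h        # normalized coordinates
    cols = [x**m * y**n for m in range(order + 1)
            for n in range(order + 1 - m)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

# Synthetic panel: smooth brightness gradient plus a dim square mura.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
panel = 100 + 0.2 * xx + 0.1 * yy
panel[20:30, 20:30] -= 3.0                     # low-contrast defect
residual = panel - fit_background(panel)
mask = residual < -1.5                          # defect darker than background
```

Global thresholding of the raw panel would fail here because the left edge of the image is darker than the defect itself; thresholding the residual isolates the Mura region.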
An Adaptive Method for Threshold Selection in Wavelet-Based Image Edge Extraction
刘宏兵; 杨万海; 马剑虹
[Journal] Journal of Xidian University (Natural Science)
[Year (Volume), Issue] 2000, 27(3)
[Abstract] This paper improves the threshold selection in Mallat's multiscale wavelet-transform edge extraction algorithm. After setting, for the whole image, a lower bound on the chain average amplitude threshold (Tm) and a chain length threshold (Tn), a rectangular adaptive method is used to select the chain average amplitude threshold for pruning local modulus maxima; edges are extracted at different scales and then combined to form the true edges of the image. Experiments show that this method, which first fixes a lower bound for the chain average amplitude threshold and then selects the threshold with the rectangular adaptive method, is a more effective edge extraction approach.
[Pages] 3 (P294-296)
[Authors] 刘宏兵; 杨万海; 马剑虹
[Affiliations] School of Electronic Engineering, Xidian University, Xi'an 710071, Shaanxi, China
[Language] Chinese
[CLC number] TP317.4
[Related literature]
1. A wavelet-transform method for edge extraction in underwater images [J], 刘维; 庞永杰; 李岩
2. Wavelet-transform threshold selection and its application to cell image denoising [J], 刘新鸣; 朱险峰; 王明时
3. A wavelet-transform-based edge extraction method for radar images [J], 刘佳敏; 周荫清
4. An improved wavelet-transform-based image edge extraction algorithm [J], 高国荣; 刘冉; 羿旭明
5. An improved wavelet-transform-based image edge extraction method [J], 李红
Adaptive Rough-Set Boundary Detection for Images
董桂云
[Journal] Computer Engineering and Applications
[Year (Volume), Issue] 2010, 46(27)
[Abstract] Rough sets are an effective tool for problems involving vagueness, randomness, complexity, and indiscernibility. Using rough set theory, this paper presents a rough-set-based image boundary detection algorithm and studies its basic principles, implementation, and complexity. Experimental results show that the algorithm outperforms other boundary detection algorithms in computation speed, noise resistance, robustness, controllability, and detection quality.
[Pages] 4 (P175-178)
[Author] 董桂云
[Affiliation] School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310012
[Language] Chinese
[CLC number] TP391.41
[Related literature]
1. Welding X-ray image segmentation with adaptive kernel regression and rough sets [J], 谢奉杰
2. An improved image boundary detection algorithm based on adaptive filtering windows [J], 黄斌; 王角凤; 吴新全
3. MR brain tumor image segmentation based on rough-set adaptive granularity [J], 姚传文; 黄道斌; 叶明全
4. Adaptive image interpolation based on rough set theory [J], 杜娟; 余英林; 谢胜利
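The rough-set notion behind such boundary detectors is easy to show concretely: granulate the image into small windows, take the lower approximation (granules entirely inside the object) and the upper approximation (granules touching it), and call their difference the boundary region. This is a generic illustration of the rough-set idea, not the paper's specific algorithm; the granulation and toy mask are assumptions.

```python
import numpy as np

def rough_boundary(mask, block=2):
    """Lower/upper approximations and boundary region of a binary
    object mask under a block x block granulation."""
    h, w = mask.shape
    gh, gw = h // block, w // block
    g = mask[:gh * block, :gw * block].reshape(gh, block, gw, block)
    lower = g.all(axis=(1, 3))    # granule lies entirely in the set
    upper = g.any(axis=(1, 3))    # granule intersects the set
    boundary = upper & ~lower     # the rough, undecidable region
    return lower, upper, boundary

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True             # a 3x3 object, misaligned with the grid
lower, upper, boundary = rough_boundary(mask, block=2)
```

Granules in `boundary` are exactly where the object's edge must pass, which is why thresholding granule purity gives a noise-tolerant boundary detector.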
Wavelet-Domain Image Denoising Based on Neighborhood-Threshold Classification
侯建华; 熊承义; 田金文; 柳健
[Journal] Opto-Electronic Engineering
[Year (Volume), Issue] 2006, 33(8)
[Abstract] The spatial correlation of image wavelet coefficients can be exploited to remove noise effectively. The one-dimensional neighborhood-threshold wavelet method is extended and applied to two-dimensional images: each wavelet coefficient within a subband is classified as "large" or "small" according to its neighborhood threshold. "Small" coefficients are set directly to zero; "large" coefficients are estimated, under a zero-mean Gaussian model with strong local spatial correlation, by the minimum mean-square-error criterion. Simulation results show that the algorithm is simple and effective: it outperforms traditional subband-adaptive threshold denoising and is comparable to two state-of-the-art spatially adaptive denoising algorithms.
[Pages] 5 (P108-112)
[Authors] 侯建华; 熊承义; 田金文; 柳健
[Affiliations] School of Electronic Information Engineering, South-Central University for Nationalities, Wuhan 430074, Hubei; Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, Hubei
[Language] Chinese
[CLC number] TP391
[Related literature]
1. Application of a DBSCAN algorithm with adjustable neighborhood thresholds to emergency plan classification management [J], 金保华; 林青; 赵家明
2. Image denoising based on weighted thresholds in the wavelet domain [J], 陈莹; 纪志成; 韩崇昭
3. Wavelet-domain image denoising with Bayesian thresholds [J], 刘慧
4. Application of an improved wavelet-domain threshold algorithm to image denoising [J], 张瑞雪; 沈小林
5. A lifting-wavelet-domain signal denoising method with a new neighborhood-correlation threshold function [J], 薛坚; 于盛林
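The "small coefficients zeroed, large coefficients shrunk by their neighborhood energy" idea in the abstract resembles the well-known NeighShrink rule: if the local energy S^2 of a coefficient's window falls below lambda^2 the coefficient is zeroed, otherwise it is scaled by (1 - lambda^2/S^2). The sketch below applies that rule to a plain array standing in for one detail subband; using a real wavelet transform and the paper's MMSE estimator are left out, so treat this as an assumption-laden illustration.

```python
import numpy as np

def neigh_shrink(coeffs, lam, win=3):
    """NeighShrink-style neighborhood thresholding of (pretend)
    wavelet detail coefficients."""
    r = win // 2
    out = np.zeros_like(coeffs)
    h, w = coeffs.shape
    for i in range(h):
        for j in range(w):
            nb = coeffs[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            s2 = float((nb ** 2).sum())
            # "Small": neighborhood energy below lam^2 -> zero.
            # "Large": shrink by 1 - lam^2 / S^2.
            shrink = max(0.0, 1.0 - lam**2 / s2) if s2 > 0 else 0.0
            out[i, j] = coeffs[i, j] * shrink
    return out

rng = np.random.default_rng(4)
coeffs = 0.1 * rng.standard_normal((16, 16))   # noise-only coefficients
coeffs[8, 8] = 5.0                             # one strong edge coefficient
den = neigh_shrink(coeffs, lam=1.0)
```

Because the decision uses the whole window's energy rather than a single coefficient, an isolated strong coefficient (an edge) survives almost untouched while surrounding pure-noise coefficients are zeroed.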
IEICE TRANS. INF. & SYST., VOL. Exx-??, NO. xx XXXX 200x
PAPER
Automatic Detection of Region-Mura Defect in TFT-LCD
Jae Y. LEE and Suk I. YOO, Nonmembers

SUMMARY Visual defects, called mura in the field, sometimes occur during the manufacturing of flat panel liquid crystal displays. In this paper we propose an automatic inspection method that reliably detects and quantifies TFT-LCD region-mura defects. The method consists of two phases. In the first phase we segment candidate region-muras from TFT-LCD panel images using the modified regression diagnostics and Niblack's thresholding. In the second phase, based on the human eye's sensitivity to mura, we quantify the mura level for each candidate, which is used to identify real muras by grading them as pass or fail. Performance of the proposed method is evaluated on real TFT-LCD panel samples.
key words: machine vision, image segmentation, regression diagnostics, industrial inspection, visual perception

1. Introduction
Recently, TFT-LCD (Thin Film Transistor Liquid Crystal Display) devices have become a major technology for FPD (Flat Panel Display). As the FPD market becomes more and more competitive, the quality of the display becomes a more critical issue for manufacturers. The most important process in controlling display quality is the inspection of visual defects that sometimes occur during the manufacturing of the flat panel liquid crystal displays. Human visual inspection, which is still used by most manufacturers, has a number of drawbacks, including limitations of human sensitivity, inconsistent detection due to human subjectivity, and high cost. Automatic inspection using machine vision techniques can overcome many of these disadvantages and offers manufacturers an opportunity to significantly improve quality and reduce costs.
One class of defects includes a variety of blemishes, called mura in the field, which appear as low-contrast, non-uniform brightness regions, typically larger than single pixels [12], [15]. They are caused by a variety of
physical factors such as non-uniformly distributed liquid crystal material and foreign particles within the liquid crystal. Depending on their shapes and sizes, mura defects may be classified into spot-mura, line-mura, and region-mura defects. Figure 1 contains sketches of several mura defects. Compared to spot-mura and line-mura, region-mura is relatively difficult to identify due to its low contrast and irregular shape. In this paper, we thus present a technique focused on region-mura.
[Manuscript received January 1, 2004; revised January 1, 2004; final manuscript received January 1, 2004. The authors are with the School of Computer Science and Engineering, Seoul National University, Shilim-Dong, Gwanak-Gu, Seoul 151-742, Korea. E-mail: leejy@ailab.snu.ac.kr, siyoo@ailab.snu.ac.kr. Mura is a Japanese word meaning blemish that has been adopted in English to name imperfections of a display pixel matrix surface that are visible when the display screen is driven to a constant gray level.]
Fig. 1: Example of line-mura, spot-mura, and region-mura defects.
Fig. 2: (a) Sample subimage of a TFT-LCD image having a dark region-mura, whose position is indicated by an arrow. (b) Thresholding result of (a) using Otsu's method. (c) Gradient magnitude image of (a).
The problem of segmenting region-muras reliably from TFT-LCD images is not easy with conventional methods. Although it is not significant, TFT-LCDs generally have an intrinsic non-uniformity due to the variance of the backlight and uneven distributions of liquid crystal material. This overall non-uniformity and the low contrast of the region-mura make it hard to apply simple thresholding directly. Otsu's method [5], for example, cannot solve the problem properly, as illustrated in Fig. 2(b). Also, region-muras have a smooth change of brightness from their surrounding regions and therefore have no clear edge, as shown in Fig. 2(c). This characteristic invalidates the preconditions required for gradient-magnitude-based approaches [9], [13]. Another problem in TFT-LCD image quality inspection is to quantify the mura level for each region-mura. Quantification is necessary to control the mura acceptance level according to the panel quality level required by the industry.
In this paper we describe an automatic inspection method that reliably detects and quantifies TFT-LCD region-mura defects. The method consists of two phases. In the first phase we segment candidate region-muras from TFT-LCD panel images using the modified regression diagnostics [6] and Niblack's thresholding [7]. In the second phase, based on the human eye's sensitivity to mura, we quantify the mura level for each candidate, which is used to identify real muras by grading them as pass or fail.

2. Approach Overview
The overall inspection procedure is shown in Fig. 3. For each TFT-LCD panel under test, predefined full-screen constant test patterns are displayed to produce digital input images. Figure 3(a) shows a TFT-LCD panel image captured when the display screen is driven to a constant gray pattern. Each input image is then divided into overlapping windows for local processing. The window size, W x H, and the amount of overlap, ΔW and ΔH, are estimated from a priori knowledge. Segmentation of region-mura is performed on each window and the local segmentation results are merged into a single binary image at their original positions in the input image, as shown in Fig. 3(d). This local processing reduces the overall non-uniformity of the input image. The merged binary image is then post-processed by median filtering, morphological closing, and morphological
Fig. 3: Overview of our region-mura inspection procedure. (a) Input image. (b) Extracted windows (W x H pixels). (c) Local segmentation results. (d) Merged segmentation result. (e) Post-processed image. (f) Extracted candidate region-mura whose mura level is to be quantified.
opening [16] to remove noise and refine the segmentation result. Finally, candidate region-muras are extracted from the post-processed image and their mura levels are quantified in order to identify real muras.
The most critical part of our approach is to segment region-muras from each window image, which can be outlined as follows:
1. We use the modified regression diagnostics to roughly estimate the background region in the window image. The estimated background region is then approximated by a low-order polynomial to generate a background surface.
2. Subtraction of the background surface from the original window image is used to find a threshold to obtain the binary segmentation result. This subtraction removes the influence of the non-uniform background and transforms the segmentation problem into a simple thresholding one.
Section 3 describes our local segmentation procedure in detail. Section 4 presents the human perception model and the quantification formula for the mura level. In Sect. 5, the performance of our method is evaluated on real TFT-LCD panel samples, and finally the conclusion is presented in Sect. 6.

3. Local Segmentation
3.1 Background Surface Estimation
To remove the influence of the non-uniform background, we first have to estimate the background surface robustly. The problem of background surface estimation can be viewed as a robust regression problem in data fitting [8]. Let I be a window image of size W x H pixels.
Each pixel located at (x, y) with the intensity value z_xy, called a data pixel, will be denoted by (x, y; z_xy) for x = 1, ..., W, y = 1, ..., H. The data set is then defined to be the set of data pixels

  Ψ = { (x, y; z_xy) | x = 1, ..., W, y = 1, ..., H }.   (1)

The data set is approximated by a bivariate polynomial model f^(d)(x, y) of order d,

  f^(d)(x, y) = Σ_{m+n ≤ d} a_mn x^m y^n,   (2)

such that f^(d)(x, y) gives the estimated intensity value at (x, y). The residual of the xy-th data pixel with respect to f^(d), denoted by r_xy, is the difference between the original and the estimated intensity of that data pixel:

  r_xy = z_xy − f^(d)(x, y).   (3)

The simplest way to estimate the model parameters a_mn may be the least-squares (LS) regression method, in which the parameters are estimated by minimizing the sum of the squared residuals:

  min Σ_{x,y} r_xy².   (4)

The LS method, however, performs poorly in terms of robustness, because even a single aberrant data point, or outlier, can completely perturb the regression result [8]. In our approach, we use a modified version of regression diagnostics [6] to estimate the background surface robustly. Diagnostics are certain quantities computed from the data with the purpose of pinpointing aberrant data points, after which these outliers can be removed, followed by an LS analysis on the remaining ones.
Fig. 4: The process of local segmentation (l = 2, h = 4). (a) Input window image. (b) Computed diagnostic measure J. (c) Constructed binary image with α = 20. (d) Median-filtered image of (c). (e) Estimated background surface f^(h)_B. (f) Absolute residuals with respect to f^(h)_B. (g) Thresholding result of residuals with T = 2. (h) Post-processing result.
Our background surface estimation algorithm, when the size of a region-mura is upper-bounded by α% of the window size, works as follows:
1. For each data pixel p in Ψ:
 a. Remove p from the data set. Let Ψ−p be the resulting data set: Ψ−p = Ψ − {p}.   (5)
 b. Determine the polynomial of order l fitting Ψ−p, denoted by f^(l)_−p, using the LS.
 c. Compute the diagnostic
measure J(p), defined to be the mean of the absolute residuals of the data pixels in Ψ−p with respect to f^(l)_−p:

  J(p) = (1 / (WH − 1)) Σ_{Ψ−p} | z_xy − f^(l)_−p(x, y) |.   (6)

2. Construct a binary image so that the α% of data pixels that have a small value of J are classified as white (value one) and the others as black (value zero).
3. Apply median filtering to the binary image.
4. Remove probable outliers from Ψ by excluding the data pixels corresponding to white pixels in the median-filtered binary image, giving an estimate of the background region, denoted by Ψ_B.
5. Determine the polynomial of order h fitting Ψ_B, denoted by f^(h)_B, using the LS.
The order of polynomial for the diagnostic measure, l, is set to the average order of the background variations of the LCD panel images, and the order for final background fitting, h, is set to the maximal order of the background variations, where the order of background variation is defined as the least order of polynomial that can fit the background with acceptable fitting error less than a predefined threshold.
The process of background surface estimation is illustrated in Fig. 4. When the size of a region-mura is much less than αWH/100 pixels, or the window image has no region-mura, some background pixels may be included among the α% data pixels. The median filtering solves this problem to some extent, as shown in Fig. 4(d). It should be pointed out that a fixed threshold on J is not feasible, since the range of J varies widely over images according to the contrast and size of the region-muras and the degree of non-uniformity of the background. The estimated background surface reflects the brightness variations of the background quite well, as shown in Fig. 4(e).
We use two polynomial models of different orders for the background surface estimation: one of order l for the diagnostic measure and the other of order h for final background fitting. The LCD panel images have varying orders of background non-uniformity. Therefore, with a fixed single polynomial model it is hard to fit them effectively: if the order of the polynomial model is less than the variations in the background, some background pixels, which are not fitted by the model, can be incorrectly classified as outliers (Fig. 5(a)), and some weak region-muras can be missed due to incorrect fitting. On the other hand, if the order of the polynomial model is too high, it can overfit the data set including the outliers and give unreliable diagnostic measures, especially when the size of the region-mura is large (Fig. 5(b)). In our two-model strategy, the overfit is minimized using the low-order polynomial model, and the possible misclassifications are corrected by the high-order model (Fig. 5(c) and (d)).
Fig. 5: Single polynomial model versus composite model: (a) biquadratic (d = 2) for both diagnostic measure and background fitting; (b) biquartic (d = 4) for both diagnostic measure and background fitting; (c)-(d) biquadratic for diagnostic measure and biquartic for background fitting. The input image of (c) is the same as (a) and the input image of (d) is the same as (b). From left to right, each column corresponds to the input window image, diagnostic measure, constructed binary image, median-filtered binary image, background surface fit, absolute residuals, thresholding result of residuals (T = 2), and post-processing result.

3.2 Thresholding
Previously, we have robustly estimated the background surface f^(h)_B including the background region Ψ_B. Let r*_xy be the residual of the xy-th data pixel with respect to f^(h)_B:

  r*_xy = z_xy − f^(h)_B(x, y).   (7)

The segmentation problem is then transformed into a simple thresholding one on the residuals. The threshold is determined based on the distribution of the residuals of the background pixels. Let µ be the mean and σ the standard deviation of the residuals of the background pixels:

  µ = (1/|Ψ_B|) Σ_{(x,y;z_xy)∈Ψ_B} r*_xy   (8)

  σ² = (1/|Ψ_B|) Σ_{(x,y;z_xy)∈Ψ_B} (r*_xy − µ)²,   (9)
OF REGION-MURA DEFECT IN TFT-LCD5Fig.6Segmentation examples.where |ΨB |is the cardinality of ΨB .For a given thresh-old T ,according to Niblack’s method [7],the image is then segmented into a binary image so that the defect region is to be white with value one and the background region to be black with value zero as follows:Z (x,y )= 1,|r ∗xy −µ|/σ>T0,|r ∗xy −µ|/σ≤T (10)Resulting binary images are merged into a single binary image as described in Sect.2and then post-processed,giving candidate region-muras.Figure 6shows three examples with their original window images,3D views,and final images processed.In Fig.6,the top image has one dark circular mura,the middle image has two dark muras,and the bottom image has two adjacent bright muras,and all muras were successfully segmented.4.Visual Perception Based IdentificationIn order to identify the real region-muras from the can-didates found in the previous section,the properties of the human visual perception have to be considered.In this section,we first present human perception model and,based on it,formulate a measurement index on mura level.The final identification procedure is then followed.4.1Visual Perception ModelThe fovea is always focused on the object of interest by the accommodation ability of the eye.The typical sim-ulation of the retina consists of object region,object-background region,and surround-background regionFig.7Observation field for computation of mura level.with different luminance stimuli [14].The regions are arranged in the observation field as concentric ones,with the object in the middle followed by the object-background and the surround-background as shown in Fig.7.The object-background is the close neighbor-hood of the object that has a strong influence on the perception of the object.The width of the object-background is set to be a half of radiate distance from the object center such that d 2=0.5d 1.Its size is biolog-ically motivated [10].The surround-background region consists of the 
area of the complete retina and has a relatively weak influence on the perception of the ob-ject.The luminance stimulus of each region can be simplified to the mean of the gray intensity of the cor-responding region in the image and will be denoted by I o for the object,I b for the object-background,and I s for the surround-background,respectively.If the surround-background is uniform and I s =I b ,we can ignore its influence on the perception of the object [2].4.2Measurement Index on Mura LevelThe ability of the eye to discriminate between changes in luminance is explained by Weber’s law [1]:if L and L +∆L are just noticeably different luminances,∆L/L is nearly a constant C w (C w is Weber’s constant).Ac-cording to Weber’s law,in the luminance term,the level of visibility of an object can be expressed asQ L =|L o −L b |/L bC w,(11)where L o denotes luminance stimuli of the object and L b denotes luminance stimuli of the object-background.The luminance stimuli of the surround-background L s is ignored in Eq.(11),assuming uniform surround such that L s =L b .In the image intensity term,we can discard the influence of L b on ∆L from Eq.(11)since the luminance is unevenly mapped to gray level in FPD devices so that ∆I ’s,the just-noticeable intensity dif-ference,are nearly equal over all gray levels (e.g.,256gray levels for 8-bit display)when the object size is fixed.Under this consideration Eq.(11)can be trans-formed into6IEICE TRANS.INF.&SYST.,VOL.Exx–??,NO.xx XXXX200x(a)(b)(c)(d)Fig.8(a)A window image with a candidate region-mura in the middle.The minimal bounding rectangle is displayed with white color.(b)The region of the candidate region-mura.(c) The intensity-scaled residual image to have maximum255and minimum0.(d)The region of the object-background.Q I=|I o−I b|∆I.(12)The just-noticeable intensity difference(JND),∆I,in-creases quickly as the object area decreases[3].In a recent SEMI standard on FPD[17],using ergonomics approach,the relation between mura area 
and JND has been formulated asJND=1.97/A0.33+0.72,(13) where A is the area of a mura.Finally,from Eq.(12) and Eq.(13),we thus have the following measurement index on mura level:Q I=|I o−I b|1.97/A0.33+0.72.(14)4.3IdentificationFor each candidate region-mura,wefirst locate a W×H window such that the minimal bounding rectangle of the candidate is centered within the window.Let I be the located window image.Next,by approximating the image surface of I except the pixels belonging to some candidate region-muras detected,we generate a polynomial surface,f(h)B,of order h.The image I isthen subtracted from f(h)B ,giving a residual image R.This subtraction,making I s=I b,removes the influence of non-uniform surround-background on the perception of the candidate region-mura.The object-background region is obtained by dilating the candidate mura region with w /2×h /2structuring element[16]and then by excluding the candidate mura region from the dilation result,where w is the width of the minimal bounding rectangle and h is the height of the rectangle.Finally, we compute the level of the candidate region-mura from Eq.(14)using I o,I b,and A given byI o=1|Ψo|p∈ΨoR(p),(15)I b=1|Ψb|p∈ΨbR(p),and(16)A=|Ψo|,(17)(a)(b)(c)(d)Fig.9Experimental results for three sample TFT-LCD panelimages.(a)Input images.(b)Results from Otsu’s method.(c)Results from Chow and Kaneko’s method.(d)Results from ourmethod.where R(p)is the residual of the data pixel p with re-spect to f(h)B,Ψo is the set of data pixels of the can-didate mura region,andΨb is the set of data pixelsof the object-background region.If the level of a can-didate region-mura exceeds the mura acceptance levelrequired by the industry,the candidate region-mura isidentified to be the real.Figure8shows an exampleillustrating this identification procedure.5.Experiments5.1Experiment IIn thefirst experiment,we compare the segmentationperformance of our method with Chow and Kaneko’sadaptive thresholding method[4].Chow and 
Chow and Kaneko employed 256 × 256-pixel images and divided them into 7 × 7 blocks of 64 × 64-pixel subimages with a 50% overlap. For each subimage having a bimodal histogram, a threshold was assigned to its center. The threshold surface was then interpolated from these local thresholds, giving every pixel in the image its own threshold. This method historically forms the foundation of local thresholding methods and is frequently cited in the literature.

Figure 9 shows the experimental results for three sample TFT-LCD panel images. The size of each input image in Fig. 9(a) is 256 × 256 pixels. The global thresholding results from Otsu's method [5] are included to show the underlying non-uniformity in the image backgrounds (Fig. 9(b)). Figure 9(c) shows the segmentation results from Chow and Kaneko's method; the parameters for the bimodality test [11] were optimized for each input image. Figure 9(d) shows the results from our method with α = 20 and T = 2. As shown in Fig. 9(c), with Chow and Kaneko's method, the mura regions are localized quite well, as with our method, but some background regions are incorrectly segmented as candidate regions. This is because bimodality rarely occurs in the background regions, and thus interpolation from neighboring thresholds can be ineffective.

JAE Y. LEE and SUK I. YOO: AUTOMATIC DETECTION OF REGION-MURA DEFECT IN TFT-LCD

Fig. 10 Plot of mura level and area of all candidate region-muras detected (axes: mura level Q vs. area A). Candidates claimed by human inspection are denoted by a blue asterisk (∗) and the other candidates by a red dot (•).

5.2 Experiment II

The next experiment was performed on 200 TFT-LCD panel samples consisting of 30 bad panels and 170 good panels. Each bad panel has at least one region-mura, for a total of 40 region-mura defects, which were detected by human visual inspection in the field. Good panels are those claimed to have no defect. The test patterns were black, blue, gray, green, red, and white, and thus 1,200 input images were captured. The resolution of each image is 1280 × 1024. Each of the 1,200 panel images was processed
using our inspection algorithm with a 256 × 256 window size and with l = 2, h = 4, α = 20, and T = 2. A total of 257 candidates were detected in the first phase, when all identical detections on the same region of a TFT-LCD panel were counted as one. There can be a maximum of six detections on the same region of a panel, as six different pattern images are captured for each panel. In the second phase, the mura level values of all candidate region-muras were computed. For the multiple-detection case, the largest mura level value was selected. The mura level value of each real region-mura claimed by human inspection was greater than 5.5, shown with a blue asterisk (∗) in Fig. 10, while the average of the mura level values of all the other candidates was less than 5.5, shown with a red dot (•) in Fig. 10. Based on this result, the mura level threshold was set to 5.5.

Fig. 11 Inspection results and quantification examples, ordered by computed mura level: Q_I = 75.17 (#1), 43.52 (#5), 26.54 (#10), 16.83 (#20), 10.94 (#30), 9.02 (#40), 7.78 (#50), and 5.72 (#60). Real region-muras claimed by human inspection are numbered in italic font.

Figure 11 shows the inspection result for each candidate region-mura and selected candidate images ordered by computed mura level value, to demonstrate the correspondence to human visibility. On average, it took 0.49 seconds to process each panel image. In this experiment, all 40 region-muras claimed by human inspection were successfully detected, but 23 additional candidates, shown in Fig. 12, were also identified as real when the mura level threshold was set to 5.5. These 23 additional defects, identified as real but not claimed by human inspection, reflect the limitations of human visual inspection, including inconsistency and weak sensitivity. Finally, the mura level threshold can be adjusted according to the panel quality level required by the industry: the threshold can be lowered until all weak defects of concern are detected, or raised to detect only serious ones.
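The second-phase decision rule described above (merge the up-to-six per-pattern detections of the same panel region by keeping the largest mura level, then threshold) can be sketched as follows; the data layout and function name are illustrative, not from the paper.

```python
from collections import defaultdict

MURA_LEVEL_THRESHOLD = 5.5  # chosen from the separation observed in Fig. 10

def identify_muras(detections, threshold=MURA_LEVEL_THRESHOLD):
    """detections: iterable of (panel_id, region_id, pattern, mura_level).
    Detections of the same panel region under different test patterns
    (black, blue, gray, green, red, white) are merged by keeping the
    largest mura level; regions above the threshold are identified real."""
    best = defaultdict(float)
    for panel_id, region_id, _pattern, q in detections:
        key = (panel_id, region_id)
        best[key] = max(best[key], q)
    return {key: q for key, q in best.items() if q > threshold}

# Example: the same region seen in two patterns keeps its larger level.
detections = [
    ("panel1", "r1", "red", 4.9),
    ("panel1", "r1", "white", 6.2),
    ("panel1", "r2", "gray", 3.1),
]
print(identify_muras(detections))  # {('panel1', 'r1'): 6.2}
```

Lowering or raising `threshold` corresponds directly to the quality-level adjustment described in the text.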
6. Conclusion

For machine vision inspection of region-muras in TFT-LCD, a technique using mura levels was suggested. From the experiment performed on 200 real TFT-LCD panel samples, the computed mura level was shown to correspond to human visibility quite well. In the 200 test samples, our method was able to detect all of the region-muras claimed by human inspection, as well as some other region-muras not caught by human inspection. We thus expect that the identification scheme based on mura level quantification can offer manufacturers a means to control the panel quality level more consistently.

Fig. 12 Additional 23 detections not claimed by human inspection but identified to be real.

Acknowledgments

This work was supported by the Mechatronics Center of Samsung Electronics Co., Ltd. with the project of ICT 04212003-0005, and partially by the project of ICT 04212000-0008 and the BK21. The ICT at Seoul National University provided research facilities for this work.

References

[1] S. Hecht, "The visual discrimination of intensity and the Weber-Fechner law," J. Gen. Physiol., vol. 7, p. 241, 1924.
[2] P. Moon and D. E. Spencer, "The visual effect of nonuniform surrounds," J. Opt. Soc. Amer., vol. 35, pp. 233–248, 1945.
[3] H. R. Blackwell, "Contrast thresholds of the human eye," J. Opt. Soc. Amer., vol. 36, pp. 624–643, 1946.
[4] C. K. Chow and T. Kaneko, "Automatic boundary detection of the left-ventricle from cineangiograms," Comput. Biomed. Res., vol. 5, pp. 388–410, 1972.
[5] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. SMC-9, pp. 62–66, 1979.
[6] D. A. Belsley, E. Kuh, and R. E. Welsch, Regression Diagnostics, John Wiley & Sons, USA, 1980.
[7] W. Niblack, An Introduction to Image Processing, Prentice-Hall, 1986, pp. 115–116.
[8] P. J. Rousseeuw and A. M. Leroy, Robust Regression and Outlier Detection, John Wiley & Sons, New York, 1987.
[9] S. D. Yanowitz and A. M. Bruckstein, "A new method for image segmentation," Comput. Vision, Graph., Image Process., vol. 46, pp. 82–95, 1989.
[10] K. Belkacem-Boussaid, A. Beghdadi, and H. Depoisot, "Edge detection using Holladay's principle," Proc. IEEE Int. Conf. Image Processing, vol. 1, pp. 833–836, 1996.
[11] J. R. Parker, Algorithms for Image Processing and Computer Vision, John Wiley & Sons, 1997.
[12] W. K. Pratt, S. S. Sawkar, and K. O'Reilly, "Automatic blemish detection in liquid crystal flat panel displays," IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, 1998.
[13] F. H. Y. Chan, F. K. Lam, and H. Zhu, "Adaptive thresholding by variational method," IEEE Trans. Image Processing, vol. 7, no. 3, pp. 468–473, 1998.
[14] L. Heucke, M. Knaak, and R. Orglmeister, "A new image segmentation method based on human brightness perception and foveal adaptation," IEEE Signal Processing Letters, vol. 7, no. 6, pp. 129–131, June 2000.
[15] VESA Flat Panel Display Measurements Standard, Ver. 2.0, June 1, 2001.
[16] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, 2002.
[17] Definition of Measurement Index (Semu) for Luminance Mura in FPD Image Quality Inspection, SEMI standard: SEMI D31-1102.

Jae Y. LEE is currently a Ph.D. candidate in the School of Computer Science and Engineering at Seoul National University, Seoul, Korea. He received the BS degree (1996) in mathematics and the MS degree (1998) in computer science from Seoul National University, Seoul, Korea. His research interests include pattern recognition and machine vision applications.

Suk I. YOO has been a professor in the School of Computer Science & Engineering at Seoul National University, Seoul, Korea, since 1985. His research interests include content-based image retrieval, machine learning, pattern recognition, and bioinformatics. He is a member of IEEE, ACM, AAAI, and SPIE. He received the BS (1977) from Seoul National University, Seoul, Korea, the MS (1980) from Lehigh University, Bethlehem, PA, and the Ph.D. (1985) in computer engineering from the University of Michigan, Ann Arbor, MI, U.S.A.