Robust Fingerprint Classification using an Eigen Block Directional Approach
Patent: METHOD FOR ROBUST CONTROL OF GENE EXPRESSION
Inventors: Michael J. Flynn, Michael B. Elowitz, Acacia Hori, Viviana Gradinaru
Application no.: US17100857, filed November 21, 2020. Publication no.: US20210171582A1, published June 10, 2021.
Applicant: California Institute of Technology, Pasadena, CA, US

Abstract: Disclosed herein are methods, compositions, and kits suitable for robust and tunable control of payload gene expression. Some embodiments provide rationally designed circuits, including miRNA-level and/or protein-level incoherent feed-forward loop circuits, that maintain the expression of a payload at an efficacious level. The circuit can comprise a promoter operably linked to a polynucleotide encoding a fusion protein comprising a payload protein, a protease, and one or more self-cleaving peptide sequences. The payload protein can comprise a degron and a cut site the protease is capable of cutting to expose the degron. The circuit can comprise a promoter operably linked to a polynucleotide comprising a payload gene, a silencer effector cassette, and one or more silencer effector binding sequences.
Robust Face Recognition via Sparse Representation -- A Q&A about the recent advances in face recognition and how to protect your facial identity

Allen Y. Yang (yang@)
Department of EECS, UC Berkeley
July 21, 2008

Q: What is this technique all about?
A: The technique, called robust face recognition via sparse representation, provides a new way for a computer program to classify human identity from frontal facial images, i.e., the well-known problem of face recognition.

Face recognition has been one of the most extensively studied problems in artificial intelligence and computer vision. Its applications include human-computer interaction, multimedia data compression, and security, to name a few. The significance of face recognition is also highlighted by the contrast between humans' high accuracy in recognizing face images under various conditions and computers' historically poor accuracy.

This technique proposes a highly accurate recognition framework. Extensive experiments have shown that, for the first time, the method can achieve recognition accuracy similar to that of human vision. In some cases, the method has outperformed what human vision can achieve in face recognition.

Q: Who are the authors of this technique?
A: The technique was developed in 2007 by Mr. John Wright, Dr. Allen Y. Yang, Dr. S. Shankar Sastry, and Dr. Yi Ma. The technique is jointly owned by the University of Illinois and the University of California, Berkeley. A provisional US patent was filed in 2008.
The technique is also being published in the IEEE Transactions on Pattern Analysis and Machine Intelligence [Wright 2008].

Q: Why is face recognition difficult for computers?
A: Several issues have historically hindered progress on face recognition in computer science.

1. High dimensionality: the data size of face images is large.

When we take a picture of a face, the face image under certain color metrics is stored as an image file on a computer, e.g., the image shown in Figure 1. Because the human brain is a massively parallel processor, it can quickly process a 2-D image and match it against the other images learned in the past. Modern computer algorithms, however, can only process 2-D images sequentially, pixel by pixel. Hence, although the image file usually takes less than 100 KB to store on a computer, if we treat each image as a sample point, it sits in a space of 10,000-100,000 dimensions (each pixel owns an individual dimension). Any pattern recognition problem in such a high-dimensional space (>100 D) is known to be difficult in the literature.

Fig. 1. A frontal face image from the AR database [Martinez 1998]. The size of a JPEG file for this image is typically about 60 KB.

2. The number of identities to classify is high.

To make the situation worse, an adult human being can learn to recognize thousands, if not tens of thousands, of different human faces over the span of his/her life. For a computer to match this ability, it first has to store tens of thousands of learned face images, which in the literature are called the training images. Then, using whatever algorithm, the computer has to process this massive data and quickly identify the correct person given a new face image, which is called the test image.

Fig. 2. An ensemble of 28 individuals in the Yale B database [Lee 2005]. A typical face recognition system needs to recognize 10-100 times more individuals.
Arguably, an adult can recognize thousands of times more individuals in daily life. Combining the above two problems, we are solving a pattern recognition problem that must carefully partition a high-dimensional data space into thousands of domains, where each domain represents the possible appearance of an individual's face images.

3. Face recognition has to be performed under various real-world conditions.

When you walk into a drug store to take a passport photo, you are usually asked to pose with a frontal, neutral expression in order to qualify for a good passport photo. The store associate will also control the photo resolution, background, and lighting by using a uniform color screen and a flash. In the real world, however, a computer program is asked to identify humans without all of the above constraints. Although past solutions exist that achieve recognition under very limited relaxations of these constraints, to this day no algorithm, including the technique we present, can answer all of the possible challenges.

To further motivate the issue, human vision can accurately recognize learned human faces under different expressions, backgrounds, poses, and resolutions [Sinha 2006]. With professional training, humans can also identify face images with facial disguise. Figure 3 demonstrates this ability using images of Abraham Lincoln.

Fig. 3. Images of Abraham Lincoln under various conditions (available online). Arguably, humans can recognize the identity of Lincoln from each of these images.

A natural question arises: do we simply ask too much of a computer algorithm? For some applications, such as security check-points, we can mandate that individuals pose with a frontal, neutral face in order to be identified. In most other applications, however, this requirement is simply not practical.
For example, we may want to search our photo albums to find all the images that contain our best friends under normal indoor/outdoor conditions, or we may need to identify a criminal suspect, who would naturally try to disguise his identity, from murky, low-resolution hidden-camera footage. Therefore, the study of recognizing human faces under real-world conditions is motivated not only by pure scientific rigor, but also by urgent demands from practical applications.

Q: What is the novelty of this technique? Why is the method related to sparse representation?
A: The method is built on a novel pattern recognition framework that relies on a scientific concept called sparse representation. In fact, sparse representation is not a new topic in many scientific areas. Particularly in human perception, scientists have discovered that accurate low-level and mid-level visual perception is the result of sparse representation of visual patterns using highly redundant visual neurons [Olshausen 1997, Serre 2006].

Without diving into technical detail, let us consider an analogy. Assume that a normal individual, Tom, is very good at identifying different types of fruit juice, such as orange, apple, lemon, and grape juice. Now he is asked to identify the ingredients of a fruit punch, which contains an unknown mixture of drinks. Tom discovers that when the ingredients of the punch are highly concentrated in a single type of juice (e.g., 95% orange juice), he has no difficulty identifying the dominant ingredient. On the other hand, when the punch is a largely even mixture of multiple drinks (e.g., 33% orange, 33% apple, and 33% grape), he has the most difficulty identifying the individual ingredients. In this example, a fruit punch can be represented as a sum of the amounts of the individual fruit drinks. We say such a representation is sparse if the majority of the juice comes from a single fruit type. Conversely, we say the representation is not sparse.
Clearly, in this example, sparse representation leads to easier and more accurate recognition than non-sparse representation.

The human brain turns out to be an excellent machine at calculating sparse representations from biological sensors. In face recognition, when a new image is presented in front of the eyes, the visual cortex immediately calculates a representation of the face image based on all the prior face images it remembers from the past. Such a representation, however, is believed to be sparse in the human visual cortex. For example, although Tom remembers thousands of individuals, when he is given a photo of his friend Jerry, he will simply assert that the photo is an image of Jerry. His perception does not attempt to calculate the similarity of Jerry's photo to the images of all the other individuals he knows. On the other hand, with the help of image-editing software such as Photoshop, an engineer can now seamlessly combine facial features from multiple individuals into a single new image. In this case, a typical human will assert that he/she cannot recognize the new image, rather than analytically calculating percentages of similarity to multiple individuals (e.g., 33% Tom, 33% Jerry, 33% Tyke) [Sinha 2006].

Q: What are the conditions that the technique applies to?
A: Currently, the technique has been successfully demonstrated to classify frontal face images under different expressions, lighting conditions, and resolutions, and under severe facial disguise and image distortion. We believe it is one of the most comprehensive solutions in face recognition, and definitely one of the most accurate. Further study is required to establish a relation, if any, between sparse representation and face images with pose variations.

Q: More technically, how does the algorithm estimate a sparse representation using face images?
Why do the other methods fail in this respect?
A: This technique is the first solution in the literature to explicitly calculate sparse representations for the purpose of image-based pattern recognition. It is hard to say that the other extant methods have failed in this respect; previously, investigators simply did not realize the importance of sparse representation in human and computer vision for the purpose of classification. For example, a well-known solution to face recognition is the nearest-neighbor method. It compares the similarity between a test image and each individual training image separately; Figure 4 illustrates the similarity measurement. The method identifies the test image with the training image that is most similar to it, hence the name nearest neighbor. We can easily observe that the representation so estimated is not sparse. This is because a single face image can be similar to multiple images in terms of its RGB pixel values, and accurate classification based on this type of metric is known to be difficult.

Fig. 4. A similarity metric (the y-axis) between a test face image and about 1200 training images. The smaller the metric value, the more similar the two images.

Our technique abandons the conventional wisdom of comparing similarities between the test image and individual training images or individual training classes. Rather, the algorithm calculates a representation of the input image w.r.t. all available training images as a whole. Furthermore, the method imposes one extra constraint: the optimal representation should use the smallest number of training images. Hence, the majority of the coefficients in the representation should be zero, and the representation is sparse (as shown in Figure 5).

Fig. 5. An estimation of sparse representation w.r.t. a test image and about 1200 training images.
The dominant coefficients in the representation correspond to the training images with the same identity as the input image. In this example, the recognition is based on downgraded 12-by-10 low-resolution images; yet the algorithm can correctly identify the input image as Subject 1.

Q: How does the technique handle severe facial disguise in the image?
A: Facial disguise and image distortion pose some of the biggest challenges to the accuracy of face recognition, and the types of distortion that can be applied to face images are manifold. Figure 6 shows some examples.

Fig. 6. Examples of image distortion on face images. Some of the cases are beyond humans' ability to perform reliable recognition.

One notable advantage of the sparse representation framework is that the problem of compensating for image distortion combined with face recognition can be rigorously reformulated under the same framework. In this case, a distorted face image presents two types of sparsity: one representing the locations of the distorted pixels in the image, and the other representing the identity of the subject, as before. Our technique has been shown to handle and eliminate all of the image distortions in Figure 6 while maintaining high accuracy. In the following, we present an example to illustrate a simplified solution for one type of distortion; for more detail, please refer to our paper [Wright 2008].

Figure 7 demonstrates the process of recognizing a face image with severe facial disguise by sunglasses. The algorithm first partitions the left test image into eight local regions and individually recovers a sparse representation per region. Notice that with the sunglasses occluding the eye regions, the corresponding representations from those regions do not provide correct classification.
However, when we look at the overall classification result over all regions, the non-occluded regions provide a high consensus for the image to be classified as Subject 1 (as shown in red circles in the figure). Therefore, the algorithm simultaneously recovers the subject identity and the facial regions that are being disguised.

Fig. 7. Solving for a part-based sparse representation using local face regions. Left: test image. Right: estimation of the sparse representations and the corresponding classifications in the titles. The red circles identify the correct classifications.

Q: What is the quantitative performance of this technique?
A: Most of the representative results from our extensive experiments are documented in our paper [Wright 2008]. The experiments were based on two established face recognition databases, namely the Extended Yale B database [Lee 2005] and the AR database [Martinez 1998]. In the following, we highlight some of the notable results. On the Extended Yale B database, the algorithm achieved 92.1% accuracy using 12-by-10 resolution images, 93.7% using single-eye-region images, and 98.3% using mouth-region images. On the AR database, the algorithm achieved 97.5% accuracy on face images with sunglasses disguise, and 93.5% with scarf disguise.

Q: Does the estimation of sparse representation cost more computation and time compared to other methods?
A: The complexity and speed of an algorithm matter to the extent that they do not hinder its application to real-world problems. Our technique uses some of the best-studied numerical routines in the literature, namely L1 minimization. These routines belong to a family of optimization algorithms called convex optimization, which are known to be extremely efficient to solve on a computer.
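As a concrete illustration, the L1-minimization step at the core of a sparse-representation classifier can be sketched in a few lines. This is a minimal toy sketch under stated assumptions, not the authors' implementation: the "training images" here are random unit-norm columns rather than face data, and the split x = u - v into nonnegative parts is one standard way to pose the L1 problem as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def src_classify(A, labels, y):
    """Sparse-representation classification sketch:
    solve min ||x||_1 subject to A x = y as a linear program
    (x = u - v with u, v >= 0), then assign y to the class whose
    training columns best reconstruct it (smallest residual)."""
    n = A.shape[1]
    c = np.ones(2 * n)                      # minimize sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x = res.x[:n] - res.x[n:]
    residuals = {}
    for cls in set(labels):
        xc = np.where(np.asarray(labels) == cls, x, 0.0)
        residuals[cls] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get), x

# Toy demo: 12 "training images" (columns), 6 per class, in 10 dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 12))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
labels = [0] * 6 + [1] * 6
y = 0.7 * A[:, 1] + 0.3 * A[:, 2]           # sparse mix of class-0 columns
cls, x = src_classify(A, labels, y)
print(cls)                                  # the mixture is built from class-0 columns
```

In the real system, the columns of A would be vectorized training face images, and the per-class residual drives the classification exactly as described above.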
In addition, considering the rapid advances in microprocessor technology, we do not believe there is any significant risk in implementing a real-time commercial system based on this technique.

Q: With this type of highly accurate face recognition algorithm available, is it becoming more and more difficult to protect biometric information and personal privacy in urban environments and on the Internet?
A: Believe it or not, a government agency, a company, or even a total stranger can capture and permanently log your biometric identity, including your facial identity, far more easily than you might imagine. According to a Time magazine report [Grose 2008], a resident living or working in London is likely to be captured on camera 300 times per day, and people living in other Western metropolitan cities presumably enjoy similar "free services." If you prefer to stay indoors and blog on the Internet, your public photo albums can be easily accessed on unprotected websites, and have probably been permanently logged by search engines such as Google and Yahoo!.

With today's ubiquitous camera technologies, completely preventing your facial identity from being obtained by others is difficult, unless you never step into the downtown area of a big city and never apply for a driver's license. However, there are ways to prevent illegal and involuntary access to your facial identity, especially on the Internet. One simple step everyone can take to stop a third party from exploiting your face images online is to prevent those images from being linked to your identity. Any classification system needs a set of training images to learn the possible appearance of your face. If you put your personal photos on a public website and frequently give away the names of the people in the photos, over time a search engine will be able to link the identities of those people with the face images in the photos.
Therefore, to prevent an unauthorized party from "crawling" your website and sifting through the valuable private information, you should put these photo sites under password protection. Do not make a large number of personal images available online without consent while at the same time providing the names of the people on the same website.

Previously, we mentioned many notable applications that involve face recognition. The technology, if properly utilized, can also revolutionize the IT industry to better protect personal privacy. For example, an assembly factory can install a network of cameras to improve the safety of the assembly line while blurring out the facial images of the workers in the surveillance videos. A cellphone user who is teleconferencing can activate a face recognition function that tracks only his/her facial movements and excludes other people in the background from being transmitted to the other party. All in all, face recognition is a rigorous scientific study. Its sole purpose is to hypothesize, model, and reproduce the image-based recognition process with accuracy comparable or even superior to human perception. The scope of its final extension and impact on our society will rest on the shoulders of the government, the industry, and each individual end user.

References
[Grose 2008] T. Grose. When surveillance cameras talk. Time (online), Feb. 11, 2008.
[Lee 2005] K. Lee et al. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, 2005.
[Martinez 1998] A. Martinez and R. Benavente. The AR face database. CVC Tech Report No. 24, 1998.
[Olshausen 1997] B. Olshausen and D. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, vol. 37, 1997.
[Serre 2006] T. Serre. Learning a dictionary of shape-components in visual cortex: Comparison with neurons, humans and machines.
PhD dissertation, MIT, 2006.
[Sinha 2006] P. Sinha et al. Face recognition by humans: Nineteen results all computer vision researchers should know about. Proceedings of the IEEE, vol. 94, no. 11, November 2006.
[Wright 2008] J. Wright et al. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008 (in press).
Hand Image Segmentation Based on Morphological Processing and Pattern Recognition
Authors: Luo Wei et al. Source: Modern Electronics Technique, 2015, No. 12.

Abstract: To segment hand images more effectively, a hand recognition algorithm based on color thresholding, morphological processing, and pattern recognition is proposed. Exploiting the differences between the hand and the surrounding background in the RGB color components, the color image is first binarized by RGB thresholding; closing and hole filling are then applied; and finally, isolated point sets in the background are removed according to the perimeter-to-area ratio to extract the hand. Experiments show that the algorithm can extract the hand from the background fairly effectively.

Keywords: hand image segmentation; color threshold; morphological processing; feature extraction; pattern recognition
CLC number: TN911.73-34; TP751. Document code: A. Article ID: 1004-373X(2015)12-0080-03

0 Introduction
Hand recognition is a pattern recognition problem involving image processing and feature extraction operations.
Because hand recognition has a wide range of applications, many researchers and institutions are studying it. Among existing results and algorithms, the most commonly used approach is machine vision [1], and many researchers have achieved accurate and effective hand recognition through a variety of machine vision techniques. Bhuyan, M. K. et al. proposed a new feature set describing continuous hand postures [2]. Guo Xunli et al. proposed a hand recognition method that fuses a skin-color model with 3-D depth information [3]. Qin Wenjun et al. proposed a method for extracting the gesture region of interest through hand-shape feature detection [4]. Wei Lai et al. extracted the hand region using Kinect joint information and a skin-color model [5]. Chai Gongbo et al. proposed a camera-array hand localization technique based on palm segmentation [6]. De Stefano, C. et al. applied genetic algorithms to the processing of hand images in pattern recognition [7]. In addition, there are methods that acquire bioelectric signals from external devices for hand recognition; for example, Guo Yina et al. proposed a real-time gesture recognition model based on electromyographic signals and a Flexible Neural Trees (FNT) model [8].

The above algorithms are all rather complex, which makes them inconvenient for engineering implementation. This paper seeks a simple, low-complexity algorithm for the hand recognition problem. Based on the differences between the hand and the surrounding background in the RGB color components, we threshold the color image to obtain a binary image, apply morphological opening and closing operations and feature extraction, and finally complete the hand recognition process.
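The threshold-binarize, close-and-fill, then component-filtering pipeline described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the skin-tone threshold values and the perimeter-to-area cutoff are illustrative assumptions, and the paper's exact thresholds are not given here.

```python
import numpy as np
from scipy import ndimage

def segment_hand(rgb, r_min=95, g_min=40, b_min=20, ratio_max=0.5):
    """Hand segmentation sketch: RGB threshold -> binarize,
    morphological closing + hole filling, then drop ragged/isolated
    components using a perimeter-to-area criterion."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # Illustrative skin-tone rule, not the paper's thresholds
    mask = (r > r_min) & (g > g_min) & (b > b_min) & (r > g) & (r > b)
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_fill_holes(mask)
    labeled, n = ndimage.label(mask)
    out = np.zeros_like(mask)
    for i in range(1, n + 1):
        comp = labeled == i
        area = comp.sum()
        # Boundary pixels = pixels removed by a one-step erosion
        perimeter = area - ndimage.binary_erosion(comp).sum()
        if area > 0 and perimeter / area < ratio_max:
            out |= comp            # keep compact components only
    return out

# Synthetic demo: a large skin-colored blob plus an isolated noise pixel
img = np.zeros((40, 40, 3), dtype=np.uint8)
img[5:30, 5:30] = (200, 120, 80)   # "hand" region
img[35, 35] = (200, 120, 80)       # isolated skin-colored noise
mask = segment_hand(img)
print(mask[10, 10], mask[35, 35])  # blob kept, noise removed
```

The perimeter-to-area ratio is small for large compact regions and close to 1 for isolated specks, which is what lets the final step discard background noise while keeping the hand.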
Analysis of Multimodal Hand Biometric Recognition Technology
Author: Xia Peng. Source: Electronic Technology & Software Engineering, 2015, No. 07.

Abstract: Biometric recognition uses a person's unique physiological and behavioral characteristics to verify identity and reduce identity theft. Although multimodal hand biometric recognition has become a major research direction and has already produced results, some problems remain to be solved in practical applications, and further study is required. This paper analyzes hand biometric recognition technologies and their characteristics, and briefly discusses how these technologies are implemented.

Keywords: multimodal; hand; biometric recognition

With the rapid development of science and technology, more and more security identification technologies are emerging, among which biometric recognition has become one of the most promising identity authentication techniques. To ensure the safety of information and people, these identification technologies must be studied in greater depth, with improved security as the premise. Realizing multimodal hand biometric recognition requires collecting data through a finger-feature acquisition system and a hand-dorsum vein acquisition system, building new recognition algorithms, and managing the key technologies well, so as to realize the functions of multimodal hand biometric recognition.
1 Overview of multimodal hand biometric recognition
A biometric recognition system generally consists of four modules: sensor, feature extraction, matching, and decision. Single-biometric combination and multi-biometric fusion can take place in any of these modules; accordingly, multimodal biometric recognition can be realized in several ways: a single biometric with multiple sensors, multiple biometrics, a single biometric with multiple representations, a single biometric with multiple units, or a single biometric with multiple matchers [1]. Multimodal hand biometric recognition takes the human hand as its basis and achieves identification through biometric features such as the palmprint, hand shape, finger texture, and veins. Research on multimodal hand biometric recognition requires understanding the principles and characteristics of each hand-feature recognition technique, determining the corresponding acquisition systems and algorithms, and applying specialized processing techniques to realize the multimodal recognition function.
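One common way to combine the matchers mentioned above is score-level fusion: normalize each matcher's raw similarity score and take a weighted sum before the accept/reject decision. The sketch below is illustrative only; the weights, score ranges, and threshold are assumptions, not values from the article.

```python
def minmax_norm(score, lo, hi):
    """Min-max normalize a raw matcher score to [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(scores, weights, threshold=0.5):
    """Score-level fusion sketch for a multimodal hand system:
    weighted sum of normalized similarity scores from several
    matchers (e.g., palmprint, hand shape, vein), followed by an
    accept/reject decision against a threshold."""
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused, fused >= threshold

# Hypothetical raw scores; the palmprint matcher reports on a 0-100 scale
raw = {"palmprint": 82.0, "hand_shape": 0.61, "vein": 0.70}
norm = [minmax_norm(raw["palmprint"], 0.0, 100.0),   # -> 0.82
        raw["hand_shape"], raw["vein"]]              # already in [0, 1]
fused, accept = fuse(norm, weights=[0.5, 0.2, 0.3])
print(round(fused, 3), accept)                       # 0.742 True
```

Weighting lets the system lean on the most reliable modality while the weaker modalities still contribute evidence, which is the practical appeal of the multimodal approach.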
2 Principles and characteristics of hand biometric recognition techniques
2.1 Fingerprint recognition
A fingerprint is the pattern of alternating ridges and valleys on the skin at the palm-side end of a finger. Different people's fingerprints differ markedly in geometric form, ridge endings, bifurcation points, and other features, and these features are permanent; a fingerprint can therefore serve as an identifying characteristic whose verification establishes a person's identity.
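The ridge endings and bifurcations mentioned above are typically located with the standard crossing-number test on a thinned (skeletonized) binary ridge image; this classic method is not taken from the article, and the tiny skeleton below is synthetic.

```python
import numpy as np

def minutiae(skel):
    """Crossing-number test on a binary ridge skeleton:
    CN 1 -> ridge ending, CN 3 -> bifurcation."""
    endings, bifurcations = [], []
    # 8-neighborhood offsets in circular order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if not skel[i, j]:
                continue
            ring = [int(skel[i + di, j + dj]) for di, dj in offs]
            # Half the number of 0/1 transitions around the pixel
            cn = sum(abs(ring[k] - ring[(k + 1) % 8]) for k in range(8)) // 2
            if cn == 1:
                endings.append((i, j))
            elif cn == 3:
                bifurcations.append((i, j))
    return endings, bifurcations

# Synthetic skeleton: a ridge that splits into two branches (a Y shape)
skel = np.zeros((7, 7), dtype=bool)
skel[3, 1:4] = True                 # horizontal ridge, ending at (3, 1)
skel[2, 4] = skel[1, 5] = True      # upper branch
skel[4, 4] = skel[5, 5] = True      # lower branch -> bifurcation at (3, 3)
ends, bifs = minutiae(skel)
print(len(ends), len(bifs))         # 3 1
```

The relative positions (and, in full systems, orientations) of these minutiae form the feature set that fingerprint matchers compare.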
American Scientists Invent a Highly Sensitive Robotic Finger
Journal: Machinery, 2012, vol. 39, no. 7.
US researchers have developed a robotic finger whose touch sensitivity exceeds that of the human hand. The sensor, named BioTac, can identify randomly selected materials with an accuracy of up to 95%, surpassing human accuracy.
Pages: 1 (I0005). Keywords: American scientists; robotic finger; sensitivity; machinery; invention; researchers; random selection; sensor. Language: Chinese. CLC number: TP242.
Academic Discussion: Identification of Medicinal Materials by Molecular Biology Techniques
Wang Jianyun, Xiong Lijuan(1) (Yunnan Institute for Drug Control, Kunming 650011; (1)Kunming Hospital of Traditional Chinese Medicine, Kunming 650011)

Abstract Objective: To review progress in the identification of Chinese medicinal materials by molecular biology techniques, as a reference for pharmaceutical workers. Methods: The literature is reviewed from three aspects: DNA extraction, RAPD analysis of DNA, and DNA sequencing; the advantages and disadvantages of these methods are discussed. Results and conclusion: Molecular biology techniques have solved the difficult problem of identifying animal-derived medicinal materials, and they will be a new direction in the authentication of Chinese medicinal materials.
Keywords: DNA sequencing; RAPD technique; identification of medicinal materials

The authenticity and quality of Chinese medicinal materials directly affect the quality and efficacy of traditional Chinese medicine. Much research has therefore been devoted to establishing effective methods for evaluating the quality of medicinal materials [1], and the identification standards have grown from macroscopic characteristics alone to include microscopic, physicochemical, chromatographic, and spectroscopic methods. For some materials, however, these methods are still insufficient for identification; examples include lubian (deer penis) [2], qianhu (Peucedanum root) [3], and mulan (Indigofera) [4]. People have therefore kept searching for more complete, more accurate, and more scientific methods of identifying medicinal materials.

In the 1980s, the advent of molecular taxonomy extended the means of biological classification from morphological characters to the genetic material, DNA. Because organisms of the same species share the same DNA sequences while different species have different DNA sequences, species can be identified from differences in their DNA sequences. Apart from minerals, most medicinal materials are derived from animals and plants. However, since most animal- and plant-derived materials are dried dead organisms or dried tissues and organs, and cells produce nucleases that extensively degrade DNA as the organism dies, DNA analysis of medicinal materials is difficult. With the rapid development of molecular biology, especially trace-DNA extraction techniques [5-7] and the polymerase chain reaction (PCR) [8], it has become possible to extract and analyze trace amounts of DNA from medicinal materials, making DNA-based identification feasible. Over the past four years, research in this area has concentrated on DNA extraction from medicinal materials, random amplified polymorphic DNA (RAPD), and DNA sequencing, briefly reviewed below.
Fingerprint Matching:
Among all biometric techniques, fingerprint-based identification is the oldest method, and it has been used successfully in numerous applications. Everyone is known to have unique, immutable fingerprints. A fingerprint is made of a series of ridges and furrows on the surface of the finger. The uniqueness of a fingerprint is determined by the pattern of ridges and furrows as well as by the minutiae points. Minutiae points are local ridge characteristics that occur at either a ridge bifurcation or a ridge ending.

Fingerprint matching techniques fall into two categories: minutiae-based and correlation-based. Minutiae-based techniques first find the minutiae points and then map their relative placement on the finger. However, this approach has some difficulties. It is difficult to extract the minutiae points accurately when the fingerprint is of low quality, and the method does not take into account the global pattern of ridges and furrows. The correlation-based method can overcome some of these difficulties, but it has shortcomings of its own: correlation-based techniques require the precise location of a registration point and are affected by image translation and rotation.

Fingerprint matching based on minutiae also has problems matching differently sized (unregistered) minutiae patterns, and local ridge structures cannot be completely characterized by minutiae. We are trying an alternate representation of fingerprints that captures more local information and yields a fixed-length code for the fingerprint. Matching then hopefully becomes the relatively simple task of calculating the Euclidean distance between the two codes. We are developing algorithms that are more robust to noise in fingerprint images and deliver increased accuracy in real time. A commercial fingerprint-based authentication system requires a very low False Reject Rate (FRR) for a given False Accept Rate (FAR). This is very
difficult to achieve with any one technique. We are investigating methods to pool evidence from various matching techniques to increase the overall accuracy of the system. In a real application, the sensor, the acquisition system, and the variation in the system's performance over time are critical. We are also field-testing our system on a limited number of users to evaluate its performance over a period of time.

Fingerprint Classification:
Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics, access control, and driver's license registration. Automatic recognition of people based on fingerprints requires matching the input fingerprint against a large number of fingerprints in a database (the FBI database contains approximately 70 million fingerprints!). To reduce the search time and computational complexity, it is desirable to classify these fingerprints in an accurate and consistent manner, so that the input fingerprint need only be matched against a subset of the fingerprints in the database. Fingerprint classification assigns a fingerprint to one of several pre-specified types already established in the literature, which can provide an indexing mechanism. Fingerprint classification can be viewed as coarse-level matching: an input fingerprint is first matched at a coarse level to one of the pre-specified types, and then, at a finer level, it is compared to the subset of the database containing only that type of fingerprint.

We have developed an algorithm to classify fingerprints into five classes, namely whorl, right loop, left loop, arch, and tented arch. The algorithm separates the number of ridges present in four directions (0, 45, 90, and 135 degrees) by filtering the central part of a fingerprint with a bank of Gabor filters. This information is quantized to generate a FingerCode, which is used for classification. Our classification is based on a
two-stage classifier that uses a K-nearest neighbor classifier in the first stage and a set of neural networks in the second stage. The classifier was tested on 4,000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90% is achieved. For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8%. By incorporating a reject option, the classification accuracy can be increased to 96% for the five-class problem and to 97.8% for the four-class problem when 30.8% of the images are rejected.

Fingerprint Image Enhancement:
A critical step in automatic fingerprint matching is to automatically and reliably extract minutiae from the input fingerprint images. However, the performance of a minutiae extraction algorithm relies heavily on the quality of the input fingerprint images. To ensure that an automatic fingerprint identification/verification system is robust with respect to the quality of the fingerprint images, it is essential to incorporate a fingerprint enhancement algorithm in the minutiae extraction module. We have developed a fast fingerprint enhancement algorithm that can adaptively improve the clarity of the ridge and furrow structures of input fingerprint images based on the estimated local ridge orientation and frequency. We have evaluated the performance of the enhancement algorithm using the goodness index of the extracted minutiae and the accuracy of an online fingerprint verification system. Experimental results show that incorporating the enhancement algorithm improves both the goodness index and the verification accuracy.
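The filter-bank step behind a FingerCode-style descriptor, filtering with even-symmetric Gabor filters at four orientations and recording quantized local responses, can be sketched as follows. This is a toy sketch under stated assumptions: the kernel size, frequency, and the square grid tessellation are illustrative choices (the published FingerCode tessellates circular sectors around a detected core point, which is omitted here), and the input is a synthetic stripe pattern rather than a fingerprint.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq, sigma=4.0, size=17):
    """Even-symmetric Gabor filter tuned to ridges at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def fingercode(img, freq=0.1, n_dirs=4, grid=4):
    """Toy FingerCode-style descriptor: filter with a bank of Gabor
    filters at n_dirs orientations, then record the mean absolute
    response in each cell of a grid x grid tessellation.
    Returns a fixed-length vector of n_dirs * grid**2 features."""
    feats = []
    for k in range(n_dirs):
        theta = k * np.pi / n_dirs              # 0, 45, 90, 135 degrees
        resp = convolve2d(img, gabor_kernel(theta, freq), mode="same")
        h, w = resp.shape
        for i in range(grid):
            for j in range(grid):
                cell = resp[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
                feats.append(np.abs(cell).mean())
    return np.array(feats)

# Synthetic "ridges": vertical stripes at the filter frequency
x = np.arange(64)
img = np.cos(2 * np.pi * 0.1 * x)[None, :].repeat(64, axis=0)
code = fingercode(img)
print(code.shape)                               # (64,)
```

Because the codes have a fixed length, two fingerprints can then be compared with a plain Euclidean distance, which is exactly the matching simplification described above; the 0-degree features dominate here because the synthetic ridges run vertically.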