Undergraduate Graduation Thesis. Topic: Research on Multi-focus Image Fusion Algorithms. Major: ___; Student: ___; Supervisor: ___; Date of completion: ___.
Thesis Assignment
I. Topic: Research on multi-focus image fusion algorithms.
II. Guiding principles and objectives: This topic comes from a research project. It mainly studies the concept of multi-focus images and the commonly used fusion algorithms for them, and then implements the relevant algorithms.
Through this graduation project, the student is expected to: 1. apply existing professional knowledge and develop the ability to solve practical engineering problems; 2. gain experience in research work and in teamwork on a challenging problem.
III. Main technical objectives: 1. study the characteristics of multi-focus images; 2. investigate multi-focus image fusion algorithms; 3. implement multi-focus image fusion.
IV. Schedule and requirements. Weeks 01-02: translate the English reference literature; Weeks 03-04: study the characteristics of multi-focus images; Weeks 05-08: investigate multi-focus image fusion algorithms; Weeks 09-14: implement the fusion programs; Weeks 15-16: write the thesis and prepare the defense.
V. Main references: 1. Zhang Defeng. MATLAB Digital Image Processing [M]. Beijing: China Machine Press, 2012. 2. Jing Zhongliang. Image Fusion: Theory and Applications [M]. Beijing: Higher Education Press, 2010. 3. Guo Lei. Image Fusion [M]. Beijing: Publishing House of Electronics Industry, 2011. 4. Sun Wei. Research on Pixel-level Multi-focus Image Fusion Algorithms [D]. Changchun: Jilin University, 2008. 5. Ma Xianxi. Research on Multi-focus Image Fusion Algorithms [D]. Wuxi: Jiangnan University, 2012. Student ___; Supervisor ___; Department Head ___.
Thesis Abstract. Image fusion combines two or more images of the same object into a single image according to certain rules.
The key is to extract the sharp (in-focus) regions of each source image and merge them according to certain rules, producing a fused image that is sharp everywhere and complete in information.
The concrete goals of multi-focus image fusion include raising the spatial resolution, improving the geometric accuracy, enhancing the visibility of features, improving classification accuracy, and replacing or repairing defective image data.
This thesis summarizes the basic concepts and background of multi-focus image fusion and presents a preliminary study of how the number of DWT decomposition levels and of directional subbands affects the fusion result.
Color-to-grayscale conversion, the transformation of three-channel color information into one-dimensional gray data, is a fundamental task and an important prerequisite in image processing and computer vision.
To save cost, black-and-white printing is still widely used, and most figures in many publications are grayscale.
Many black-and-white images also carry artistic value, which has given grayscale imagery aesthetic applications such as Chinese ink-painting rendering and black-and-white photography [1].
Converting a color image to grayscale also reduces the amount of input information and the subsequent computational load, so it is widely used in preprocessing steps such as edge detection [2-3] and feature extraction [4-5].
To preserve the features of the color image better after conversion, many methods have been proposed.
Depending on whether the mapping function is applied uniformly to all pixels of the image, common grayscale-conversion algorithms fall into two classes: global mapping and local mapping.
In local mapping, the gray value varies with spatial position: the same color may be assigned different gray values in order to enhance local contrast in the gray image, which makes the result sensitive to neighboring pixels.
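As an illustration of the global-mapping class, here is a minimal sketch (the function name and the Rec. 601 luma weights are our own choices, not taken from the surveyed papers): one fixed linear map is applied to every pixel, so identical colors always receive identical gray values.

```python
import numpy as np

def global_grayscale(rgb, weights=(0.299, 0.587, 0.114)):
    """Global mapping: one fixed linear combination of R, G, B applied
    uniformly to every pixel (Rec. 601 luma weights as a common default)."""
    return rgb.astype(np.float64) @ np.asarray(weights)

# Two pixels of the same color always map to the same gray value;
# this is exactly the global consistency that local mapping gives up.
img = np.array([[[255, 0, 0], [255, 0, 0], [0, 255, 0]]], dtype=np.uint8)
gray = global_grayscale(img)
```

A local-mapping method would instead let the weights vary with position, trading this consistency for higher local contrast.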
In 2004, Bala et al. [6] introduced high-frequency chrominance information into the luminance channel, locally preserving the differences between adjacent colors.
Smith et al. [7] used a Laplacian pyramid to extract multi-level image features and adjusted the gray values of each pyramid level according to the ratio of color contrast to gray contrast, enhancing weak edges and adjusting contrast.
Lu Hongyang et al. [8] proposed an algorithm based on maximum weighted projection, building a maximized weighted locality-preserving projection model and deriving the corresponding objective function.
Local mapping methods try to capture the local differences of colors in three dimensions and, by controlling pixel luminance, preserve local image features precisely; but they cannot guarantee global color consistency and turn a homogeneous image into a non-homogeneous one.

Multi-scale Fusion Based Grayscale Algorithm for Color Images. Gu Meihua, Wang Miaomiao, Li Liyao, Feng Jing. School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710600, China.
Abstract: To retain more of the original features when a color image is converted to grayscale, a new grayscale-conversion algorithm based on multi-scale image fusion is proposed.
The color image is split into its R, G, and B channel images and converted to grayscale with a multi-scale image fusion model based on the Gaussian-Laplacian pyramid; gradient domain guided image filtering (GGIF) is introduced to remove the artifacts that multi-scale fusion may produce.
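The pyramid-based fusion step can be sketched as follows. This is a hedged, minimal sketch of Gaussian-Laplacian pyramid fusion in general, not the authors' exact model: their Gaussian filtering and GGIF post-processing are omitted, and the plain 2x decimation with nearest-neighbour upsampling is a stand-in for proper Gaussian reduce/expand.

```python
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid: each level stores the detail lost by
    downsampling; the last entry is the low-resolution residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        down = cur[::2, ::2]                                  # coarse approximation
        up = np.repeat(np.repeat(down, 2, 0), 2, 1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)                                  # detail (Laplacian) layer
        cur = down
    pyr.append(cur)                                           # residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add the detail layers back."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        up = np.repeat(np.repeat(cur, 2, 0), 2, 1)[:lap.shape[0], :lap.shape[1]]
        cur = up + lap
    return cur

def fuse_channels(channels, levels=3):
    """Fuse the channel pyramids: keep, at every detail coefficient, the
    channel value with the largest magnitude; average the residuals."""
    pyrs = [laplacian_pyramid(c, levels) for c in channels]
    fused = []
    for lvl in range(levels):
        stack = np.stack([p[lvl] for p in pyrs])
        idx = np.abs(stack).argmax(axis=0)
        fused.append(np.take_along_axis(stack, idx[None], 0)[0])
    fused.append(np.mean([p[-1] for p in pyrs], axis=0))
    return reconstruct(fused)
```

With this simple reduce/expand pair, the pyramid is exactly invertible, so fusing three identical channels returns the input unchanged.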
A New Multi-focus Image Fusion Algorithm Combining SVM and FNN.
Abstract: To deal with the problems of cracks among blocks and the uncertainty of the real characteristics in block-based image fusion, this paper proposes a new multi-focus image fusion method combining a support vector machine (SVM) with a fuzzy neural network (FNN). First, FCM and SVM are used to obtain the parameters of the FNN, and each block is classified by the FNN into clear, blurred, and transitional zones. The three classified areas are then merged with weights to obtain the fused multi-focus image, where the weight factors are the defuzzification outputs of the fuzzy neural network. Finally, the quality of various fusion algorithms is evaluated by the root mean square error (RMSE), the mean absolute error (MAE), and the peak signal-to-noise ratio (PSNR). The experimental results show that the proposed fusion algorithm has good robustness and computational performance, basically meeting the demands of practical image fusion, and the quality evaluations show that it has an advantage over existing fusion algorithms.
1 Introduction. Image information fusion is an important branch of information fusion: by organically combining image information captured by multiple sensors viewing the same scene, it produces an image that is more comprehensive, accurate, and complete, compensating for the limitations of any single sensor [1].
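The three evaluation metrics named in the abstract above translate directly into code; a small sketch (the function names are ours, and `peak=255` assumes 8-bit images):

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a fused image."""
    d = np.asarray(ref, float) - np.asarray(img, float)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(ref, img):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(ref, float) - np.asarray(img, float))))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, img)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```

Lower RMSE/MAE and higher PSNR indicate a fused image closer to the reference.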
Multi-focal-plane Image Fusion for Nonwoven Fabrics Based on the GHM Multiwavelet Transform. Chen Yang, Xin Binjie, Deng Na.
Abstract: Fabric images captured by an optical microscope at a single focal plane leave some fiber regions blurred. To address this, a multi-focal-plane image fusion algorithm for nonwoven fabrics based on the GHM multiwavelet transform is proposed. A self-built microscopic imaging system for nonwovens collects fabric image sequences at different focal planes; the initial image sequence is pre-processed by critically sampled pre-filtering; two fusion rules handle the high- and low-frequency components respectively; an initial fused fabric image is obtained by multiwavelet fusion and inverse transform; this fused image is then fused with each subsequent single-focal-plane image in turn, iterating until all fiber regions are displayed sharply. Experimental results show that the method digitally fuses image sequences captured at different focal planes into a single image in which the fiber web is in focus over the whole field of view, facilitating subsequent computer image processing and measurement.
Journal: Journal of Textile Research, 2019, 40(6): 125-132 (in Chinese). CLC: TP311.1.
Keywords: critically sampled pre-filtering; GHM multiwavelet; multi-focal-plane fusion; nonwoven fabric image; microscopic imaging.
Affiliations: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China; School of Fashion, Shanghai University of Engineering Science, Shanghai 201620, China.
Nonwoven fabric consists of randomly or directionally arranged fibers and is produced mainly by spunbonding.
The properties of a nonwoven are closely related to the pore structure of its fiber web, and fiber thickness, width, and orientation, as well as the way the web is formed, all affect that structure; obtaining these structural parameters and relating them to performance therefore provides important guidance for both production and application.
At present the pore structure of nonwovens is mostly analyzed by indirect methods, which are time-consuming and labor-intensive and cannot account for the complexity of the pore structure.
The development of computer digital image processing provides an effective tool for studying the structure and properties of nonwovens.
Image quality is crucial to fiber morphology measurement and structural analysis.
Because a nonwoven is relatively thick, the depth of field of an ordinary optical microscope is insufficient to render all fibers sharply in a single image.
Measurements based on such partially focused images give inaccurate fiber structure and may even mislead subsequent processing [1].
Multi-focus Image Fusion Based on the Contourlet Transform. Ding Lan. Source: Computer Knowledge and Technology, 2008(34).
Abstract: Because the focusing range of visible-light imaging systems is limited, it is hard to obtain an image in which every object in a scene is sharp; multi-focus image fusion solves this problem effectively.
The Contourlet transform is multi-scale and multi-directional; introducing it into image fusion extracts the features of the source images better and provides more information for the fused image.
This paper proposes a Contourlet-based multi-focus image fusion method using a region-statistics fusion rule.
The differently focused images are first decomposed by the Contourlet transform; the low-frequency coefficients are fused by averaging and the high-frequency coefficients by a rule driven by region statistics; the inverse transform then yields the fused result.
Experimental results are given and analyzed, showing that the method achieves better fusion than wavelet-based methods.
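The fusion rule described above (low-pass averaging, high-pass selection by a region statistic) can be sketched independently of the transform. The sketch below uses local energy in a 3x3 window as the region statistic and operates on generic subband arrays; all names are ours, and a real implementation would run on actual Contourlet subbands.

```python
import numpy as np

def region_energy(band, win=3):
    """Local (windowed) energy of a subband, used as the region statistic."""
    pad = win // 2
    p = np.pad(band.astype(np.float64) ** 2, pad, mode='edge')
    out = np.zeros(band.shape, dtype=np.float64)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out

def fuse_subbands(low_a, low_b, highs_a, highs_b):
    """Low-pass: average. High-pass: pick, per coefficient, the source whose
    local region energy is larger (the region-statistics rule in spirit)."""
    low_f = 0.5 * (low_a + low_b)
    highs_f = []
    for ha, hb in zip(highs_a, highs_b):
        mask = region_energy(ha) >= region_energy(hb)
        highs_f.append(np.where(mask, ha, hb))
    return low_f, highs_f
```

Applying the inverse transform to the fused subbands would then yield the fused image.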
Keywords: image fusion; Contourlet transform; wavelet transform. CLC: TP391. Document code: A. Article ID: 1009-3044(2008)34-1700-03.
Multifocus Image Fusion Based on Contourlet Transform. DING Lan (College of Information Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China).
Abstract: Due to the limited depth of focus of optical lenses, it is often difficult to get an image that contains all relevant objects in focus. Multifocus image fusion can solve this problem effectively. The Contourlet transform has multiple scales and varying directions; when it is introduced into image fusion, the characteristics of the original images are captured better and more information is obtained for fusion. A new multifocus image fusion method based on the Contourlet transform with a region-statistics fusion rule is proposed in this paper. The differently focused images are first decomposed with the Contourlet transform; the low-pass bands are then combined by weighted averaging and the high-pass bands by the region-statistics rule; the fused image is obtained by the inverse Contourlet transform. Experimental results are shown and compared with a wavelet-based method; the experiments show that this approach achieves better results.
Key words: image fusion; contourlet transform; wavelet transform.
1 Introduction. For a visible-light imaging system, the limited focusing range means that all targets in a scene can rarely be imaged sharply at the same time. Multi-focus image fusion addresses this: the same lens images the two (or more) targets in the scene in two (or more) exposures, and the sharp parts of these exposures are fused into a new image that is easier for people to inspect or for a computer to process further.
J. Wang, X. Liao, and Z. Yi (Eds.): ISNN 2005, LNCS 3497, pp. 753-758, 2005. © Springer-Verlag Berlin Heidelberg 2005.
Multifocus Image Fusion Using Spatial Features and Support Vector Machine. Shutao Li (1,2) and Yaonan Wang (1). 1. College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China. 2. National Laboratory on Machine Perception, Peking University, Beijing 100871, China. *******************.cn
Abstract. This paper describes an application of support vector machines to the pixel-level multifocus image fusion problem, based on the spatial features of image blocks. The algorithm first decomposes the source images into blocks. Given two of these blocks (one from each source image), an SVM is trained to determine which one is clearer. Fusion then proceeds by selecting the clearer block when constructing the final image. Experimental results show that the proposed method outperforms the discrete wavelet transform based approach, particularly when there is movement in the objects or misregistration of the source images.
1 Introduction. Optical lenses often suffer from the problem of limited depth of field. Consequently, the image obtained will not be in focus everywhere. A possible way to alleviate this problem is image fusion [1], in which several pictures with differently focused parts are combined to form a single image. This fused image will then hopefully contain all relevant objects in focus. In recent years, various methods based on multiscale transforms have been proposed, including the Laplacian pyramid [2], the gradient pyramid [1], the ratio-of-low-pass pyramid [3] and the morphological pyramid [4]. More recently, the discrete wavelet transform (DWT) [5], [6] has also been used. In general, DWT is superior to the previous pyramid-based methods [6].
While these methods often perform satisfactorily, their multiresolution decompositions and consequently the fusion results are not shift invariant because of an underlying downsampling process. When there is slight camera/object movement or misregistration of the source images, their performance thus quickly deteriorates. In this paper, we propose a pixel-level multifocus image fusion method based on the spatial features of image blocks and support vector machines (SVM). The implementation is computationally simple and is robust to the shift problem. Experimental results show that it outperforms the DWT-based method. The rest of this paper is organized as follows. The proposed fusion scheme is described in Section 2. Experiments are presented in Section 3, and the last section gives some concluding remarks.

2 SVM Based Multifocus Image Fusion

2.1 Feature Extraction. In this paper, we extract two measures from each image block to represent its clarity. These are described in detail as follows.

2.1.1 Spatial Frequency (SF). Spatial frequency measures the overall activity level of an image [7]. For an $M \times N$ image $F$, with gray value $F(m,n)$ at pixel position $(m,n)$, the spatial frequency is defined as

$$SF = \sqrt{RF^2 + CF^2}, \qquad (1)$$

where $RF$ and $CF$ are the row frequency and column frequency

$$RF = \sqrt{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=2}^{N}\bigl(F(m,n)-F(m,n-1)\bigr)^2},$$

$$CF = \sqrt{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=2}^{M}\bigl(F(m,n)-F(m-1,n)\bigr)^2},$$

respectively.

2.1.2 Absolute Central Moment (ACM) [8].

$$ACM = \sum_{i=0}^{I-1}\left|i-\mu\right|\,p(i), \qquad (2)$$

where $\mu$ is the mean intensity value of the image, $i$ is the gray level, and $p(i)$ its probability.

2.1.3 Demonstration of the Effectiveness of the Measures. In this section, we experimentally demonstrate the effectiveness of the two focus features. An image block of size 64×64 (Fig. 1(a)) is extracted from the "Lena" image. Fig. 1(b) to Fig. 1(e) show degraded versions blurred with a Gaussian filter of radius 0.5, 0.8, 1.0 and 1.5, respectively.
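Equations (1) and (2) translate directly into code; a sketch (assuming 8-bit gray levels for the ACM histogram):

```python
import numpy as np

def spatial_frequency(F):
    """SF = sqrt(RF^2 + CF^2): RMS of horizontal and vertical
    first differences, as in Eq. (1)."""
    F = np.asarray(F, dtype=np.float64)
    M, N = F.shape
    rf2 = np.sum((F[:, 1:] - F[:, :-1]) ** 2) / (M * N)   # row frequency^2
    cf2 = np.sum((F[1:, :] - F[:-1, :]) ** 2) / (M * N)   # column frequency^2
    return np.sqrt(rf2 + cf2)

def absolute_central_moment(F, levels=256):
    """ACM = sum_i |i - mu| p(i) over the gray-level histogram, Eq. (2)."""
    F = np.asarray(F, dtype=np.float64)
    hist, _ = np.histogram(F, bins=levels, range=(0, levels))
    p = hist / F.size
    i = np.arange(levels)
    return float(np.sum(np.abs(i - F.mean()) * p))
```

Both values shrink as an image is blurred, which is what makes them usable as clarity features.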
As can be seen from Table 1, as the image becomes more blurred, the two features decrease monotonically. These results suggest that both features reflect image clarity.

2.2 The Fusion Algorithm. Fig. 2 shows a schematic diagram of the proposed multifocus image fusion method. Here, we consider the processing of just two source images, though the algorithm can be extended straightforwardly to handle more than two.

Fig. 1. Original and blurred regions of an image block extracted from "Lena": (a) original region, (b) radius = 0.5, (c) radius = 0.8, (d) radius = 1.0, (e) radius = 1.5.

Table 1. Feature values for the image regions in Fig. 1.
        Fig. 1(a)  Fig. 1(b)  Fig. 1(c)  Fig. 1(d)  Fig. 1(e)
SF      40.88      20.61      16.65      14.54      11.70
ACM     51.86      48.17      47.10      46.35      44.96

Fig. 2. Schematic diagram of the proposed fusion method.

In detail, the algorithm consists of the following steps:
1. Decompose the two source images A and B into blocks of size M×N. Denote the i-th image block pair by $A_i$ and $B_i$ respectively.
2. From each image block, extract the two features described above that reflect its clarity. Denote the feature vectors of $A_i$ and $B_i$ by $(SF_{A_i}, ACM_{A_i})$ and $(SF_{B_i}, ACM_{B_i})$ respectively.
3. Train an SVM to determine whether $A_i$ or $B_i$ is clearer. The difference vector $(SF_{A_i}-SF_{B_i},\ ACM_{A_i}-ACM_{B_i})$ is used as input, and the output is labeled according to

$$target_i = \begin{cases} 1 & \text{if } A_i \text{ is clearer than } B_i \\ -1 & \text{otherwise.} \end{cases} \qquad (3)$$

4. Perform testing of the trained SVM on all image block pairs obtained in Step 1. The i-th block $Z_i$ of the fused image is then constructed as

$$Z_i = \begin{cases} A_i & \text{if } out_i > 0.5 \\ B_i & \text{otherwise,} \end{cases} \qquad (4)$$

where $out_i$ is the SVM output for the i-th image block pair.
5. Verify the fusion result obtained in Step 4. Specifically, if the SVM decides that a particular block is to come from A but the majority of its surrounding blocks come from B, this block is switched to come from B.
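Step 5's consistency verification can be sketched as follows. This is a hedged sketch: `decision` is a hypothetical binary block-decision map where 1 means "take the block from A", and the edge-replication border handling is our own choice.

```python
import numpy as np

def majority_filter(decision, win=3):
    """3x3 majority vote over the binary block-decision map (step 5):
    a block assigned to one source but surrounded mostly by blocks
    from the other source is switched."""
    pad = win // 2
    p = np.pad(decision.astype(np.int64), pad, mode='edge')
    votes = np.zeros(decision.shape, dtype=np.int64)
    for dy in range(win):
        for dx in range(win):
            votes += p[dy:dy + decision.shape[0], dx:dx + decision.shape[1]]
    return (votes > (win * win) // 2).astype(decision.dtype)

def assemble(blocks_a, blocks_b, decision):
    """Build the fused image: take each block from A where the
    (majority-filtered) decision map is 1, otherwise from B."""
    d = majority_filter(decision)
    rows = []
    for i in range(d.shape[0]):
        rows.append(np.hstack([blocks_a[i][j] if d[i, j] else blocks_b[i][j]
                               for j in range(d.shape[1])]))
    return np.vstack(rows)
```

An isolated "A" block in a neighbourhood of "B" blocks is flipped, which is exactly the correction the verification step describes.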
In the implementation, a majority filter with a 3×3 window is used.

3 Experiments. Experiments are performed on the 256-level source images shown in Fig. 3(a) and (b); their size is 512×512. The true gray values of a reference image are not available, so only a subjective visual comparison is intended here. Image blocks of size 32×32 are used. Two pairs of regions in Fig. 3(a),(b), each containing 18 image block pairs, are selected as the training set. In 9 of these block pairs the first image is clearer than the second, and the reverse is true for the remaining 9 pairs. The two spatial features are extracted and normalized to the range [0,1] before being fed into the SVM. In the experiment, a linear kernel is used. For comparison purposes, we also perform fusion using the DWT. The wavelet basis "db8", with a decomposition level of 5, is used. Similar to [6], we employ a region-based activity measurement for the activity level of the decomposed wavelet coefficients, a maximum selection rule for coefficient combination, and a window-based consistency verification scheme.

Fig. 3. The "Pepsi" source images and fusion results: (a) focus on the Pepsi can; (b) focus on the testing card; (c) fused image using DWT (db8, level = 5); (d) fused image using SVM. The training set is selected from the regions marked by rectangles in Fig. 3(a) and Fig. 3(b).

Fig. 4. Differences between the fused images in Fig. 3(c),(d) and the source images in Fig. 3(a),(b): (a) DWT fusion (Fig. 3(c)) minus source Fig. 3(a); (b) DWT fusion minus source Fig. 3(b); (c) SVM fusion (Fig. 3(d)) minus source Fig. 3(a); (d) SVM fusion minus source Fig. 3(b).

Fusion results using DWT and SVM are shown in Fig. 3(c),(d). Take the "Pepsi" images as an example.
Recall that the focus in Fig. 3(a) is on the Pepsi can while that in Fig. 3(b) is on the testing card. It can be seen from Fig. 3(d) that the fused image produced by the SVM is basically a combination of the well-focused can and the well-focused board. In comparison, the DWT result in Fig. 3(c) is much inferior. Clearer comparisons of their performance can be made by examining the differences between the fused images and each source image (Fig. 4).

4 Conclusions. In this paper, we proposed a method for pixel-level multifocus image fusion using the spatial features of image blocks and an SVM. Features indicating the clarity of an image block, namely spatial frequency and absolute central moment, are extracted and fed into the support vector machine, which learns to determine which source image is clearer at that particular physical location. Experimental results show that this method outperforms the DWT-based approach, particularly when there is object movement or registration problems in the source images.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (No. 60402024).

References
1. Burt, P.J., Kolczynski, R.J.: Enhanced Image Capture through Fusion. In: Proc. of the 4th Inter. Conf. on Computer Vision, Berlin (1993) 173-182
2. Burt, P.J., Adelson, E.H.: The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Comm., 31 (1983) 532-540
3. Toet, A., van Ruyven, L.J., Valeton, J.M.: Merging Thermal and Visual Images by a Contrast Pyramid. Optic. Eng., 28 (1989) 789-792
4. Matsopoulos, G.K., Marshall, S., Brunt, J.N.H.: Multiresolution Morphological Fusion of MR and CT Images of the Human Brain. Proc. of IEE: Vision, Image and Signal, 141 (1994) 137-142
5. Li, H., Manjunath, B.S., Mitra, S.K.: Multisensor Image Fusion Using the Wavelet Transform. Graph. Models Image Proc., 57 (1995) 235-245
6. Zhang, Z., Blum, R.S.: A Categorization of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. of the IEEE, 87 (1999) 1315-1325
7. Eskicioglu, A.M., Fisher, P.S.: Image Quality Measures and Their Performance. IEEE Trans. Comm., 43 (1995) 2959-2965
8. Shirvaikar, M.V.: An Optimal Measure for Camera Focus and Exposure. 36th IEEE Southeastern Symp. on Sys. Theory, Atlanta (2004) 472-475
Image Fusion Algorithm Based on Steerable Filters and Spatial Frequency. Guo Feng, Yang Jing, Shi Jianfang.
Abstract: Through research on pixel-level image fusion, a multi-focus image fusion algorithm based on steerable filters and spatial frequency is proposed. Steerable filters are used to filter the original images and obtain oriented analytic images in different directions. The local spatial frequency of each oriented analytic image is calculated and compared to obtain an oriented analytic spatial-frequency map. The fused image is generated from pixels selected from the original images according to the fusion rule. Experimental results show that, compared with other fusion algorithms, the proposed algorithm has certain advantages in both subjective evaluation and objective evaluation indicators.
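The local spatial-frequency comparison described in the abstract can be sketched as a per-pixel selection rule. This is a simplified sketch: it works directly on the two source images rather than on the oriented analytic images produced by steerable filtering, and the 5x5 window is our own choice.

```python
import numpy as np

def local_sf_map(img, win=5):
    """Per-pixel spatial frequency over a win x win neighbourhood:
    windowed mean of squared horizontal and vertical first differences."""
    f = img.astype(np.float64)
    d2 = np.zeros(f.shape, dtype=np.float64)
    d2[:, 1:] += (f[:, 1:] - f[:, :-1]) ** 2   # horizontal differences
    d2[1:, :] += (f[1:, :] - f[:-1, :]) ** 2   # vertical differences
    pad = win // 2
    p = np.pad(d2, pad, mode='edge')
    out = np.zeros(f.shape, dtype=np.float64)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    return np.sqrt(out / (win * win))

def fuse_by_sf(img_a, img_b, win=5):
    """Pixel-selection rule: take each pixel from the source whose local
    spatial frequency is higher (i.e. sharper in that neighbourhood)."""
    return np.where(local_sf_map(img_a, win) >= local_sf_map(img_b, win),
                    img_a, img_b)
```

A textured (in-focus) region wins against a flat (defocused) one everywhere its local spatial frequency dominates.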
Block-based Multi-focus Image Fusion Using Nonseparable Wavelets. Liu Wei-jie (1), Liu Bin (2), Peng Jia-xiong (3). (1. School of Computer, Wuhan University, Wuhan 430079, China; 2. School of Mathematics and Computer Science, Hubei University, Wuhan 430062, China; 3. Institute of Image Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, China.)
Abstract: To address the large root mean square error and the small entropy and spatial frequency of existing image fusion methods, a block-based multi-focus image fusion method using nonseparable wavelets is proposed.
The method decomposes the source images at multiple scales with a nonseparable wavelet filter bank, selects the blocks with more salient features (larger variance) as the component blocks of the fused subimages, and applies the inverse nonseparable wavelet transform to the fused block image to form the fused image.
Experimental results show that, compared with other fusion methods, this method eliminates blocking artifacts, saves computation, and achieves a better fusion effect.
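The block-selection rule (keep the block with the larger variance) can be sketched directly. In the paper it is applied to nonseparable-wavelet subimages, so the spatial-domain version below is a simplified stand-in; the block size of 8 and the tie-breaking toward image A are our own choices.

```python
import numpy as np

def fuse_blocks_by_variance(img_a, img_b, bs=8):
    """Split both (equal-sized) images into bs x bs blocks and keep, for
    each position, the block with the larger variance (the 'more salient
    feature' criterion from the abstract)."""
    out = np.empty(img_a.shape, dtype=np.float64)
    H, W = img_a.shape
    for y in range(0, H, bs):
        for x in range(0, W, bs):
            a = img_a[y:y + bs, x:x + bs].astype(np.float64)
            b = img_b[y:y + bs, x:x + bs].astype(np.float64)
            out[y:y + bs, x:x + bs] = a if a.var() >= b.var() else b
    return out
```

Working in the wavelet domain, as the paper does, is what lets the final inverse transform smooth over the block boundaries that a purely spatial version would leave behind.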
Keywords: multi-focus image fusion; nonseparable wavelet; root mean square error.
Multi-focus Image Fusion of Nonseparable Wavelet Based on Blocking. LIU Wei-jie (1), LIU Bin (2), PENG Jia-xiong (3) (1. School of Computer, Wuhan University, Wuhan 430079, China; 2. School of Mathematics and Computer Science, Hubei University, Wuhan 430062, China; 3. Institute of Image Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, China).
Abstract: In order to solve the problems that existing image fusion methods have a larger Root Mean Square Error (RMSE) and smaller entropy and spatial frequency, this paper presents a block-based multi-focus image fusion method using nonseparable wavelets. The source images are decomposed with the nonseparable wavelet filter bank. The subimages are then segmented into blocks, and the blocks with the larger variance are selected for fusion. The inverse wavelet transform is carried out to produce the fused image. Experimental results show that the fusion performance of the method is better than that of the other fusion methods: it eliminates the blocking artifacts of the fused images and saves fusion time.
Key words: multi-focus image fusion; nonseparable wavelet; Root Mean Square Error (RMSE). DOI: 10.3969/j.issn.1000-3428.2011.02.071. Computer Engineering, Vol. 37, No. 2, January 2011. Graphics and Image Processing. Article ID: 1000-3428(2011)02-0205-02. Document code: A. CLC: TP391.
1 Overview. In recent years, image fusion has been widely applied in military and civilian fields such as remote sensing, medical imaging, and machine vision [1].
A New Multi-focus Image Fusion Method Using Principal Component Analysis in Shearlet Domain. Biswajit Biswas, Ritamshirsa Choudhuri, Kashi Nath Dey (Department of Computer Science and Engineering, University of Calcutta, Kolkata, India), Amlan Chakrabarti (AKCSIT, University of Calcutta, Kolkata, India).
ABSTRACT. Multi-focus image fusion produces a single image in which the entire view is focused by combining multiple images taken at different focus distances. Here we present a concept for multi-focus image fusion using principal component analysis (PCA) in the shearlet domain. Our proposed concept works in two folds: i) transform the source image into a shearlet image using the shearlet transform (ST); ii) use a PCA model in the low-pass sub-band, by which the best pixels in smooth parts are selected according to their arrangement. The composition of the different high-pass sub-band coefficients obtained by the ST decomposition is realized, and the resultant fused image is reconstructed by the inverse shearlet transform (IST). The experimental results show that the proposed technique can deliver better fusion results than some existing methods in the state of the art. The comparative assessment is done in the light of qualitative and quantitative measurements, in terms of mutual information and the fusion metric $Q^{AB/F}$.
Categories and Subject Descriptors: I.4 [Computing Methodologies]: Image Processing - Image Fusion.
Keywords: multi-focus image fusion, shearlet transform, principal component analysis, mutual information.
PerMin'15, February 26-27, 2015, Kolkata, West Bengal, India. Copyright 2015 ACM 978-1-4503-2002-3/15/02. DOI: 10.1145/2708463.2709064.
1. INTRODUCTION. Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing [1]. This technology has been widely used in different areas such as military, remote sensing, and medicine [2]. Nowadays, many image fusion techniques have been proposed to fuse multi-focus images [8, 9, 11, 12]. In general, these fusion techniques fall into two classes: i) spatial domain methods and ii) transform domain methods [1, 2]. In spatial domain methods, the intensity values of pixels from the input images are fused directly. The simplest fusion method of this class computes the average of all the input images (the 'average scheme'), but the average scheme often suffers from sharpness and contrast degradation [3].
An improvement over this baseline is to use the sharper pixels at each pixel location over all the input images to compose the fused image. This requires an evaluation metric, such as variance [2], energy of image gradient [3], or energy of Laplacian [3, 5], to establish how much of a pixel should be fused. However, in these pixel-based methods, e.g. principal component analysis (PCA) and intensity-hue-saturation (IHS), the presence of noise can cause inaccurate measurement of sharpness and degrade fusion performance, since sharpness is evaluated locally around each pixel [3]. To resolve the noise issue, block-based methods have been proposed [8]. In block-based methods, the input images are first divided into blocks/regions, and the fused image is composed by selecting the sharper blocks/regions from the input images [12]. These block-based methods may suffer from artifacts appearing at the boundaries of blocks, which greatly reduce the quality of the fused image. These limitations of spatial domain methods are handled by transform domain methods. The wavelet transform, Laplacian pyramid, and curvelet transform are pioneering methods of the transform domain, and they produce better results than spatial domain methods. In recent years, the wavelet transform, with its time-frequency localization, multi-scale character, and sparse representation of target functions with point singularities, has been widely used in image processing with good effect [6, 7, 10]. However, for images containing higher-dimensional singularities, the wavelet transform cannot achieve an optimally sparse approximation [13, 14]. To overcome this limitation of wavelet representations, multi-scale geometric analysis theory was developed, yielding a series of new multi-scale geometric transforms such as the ridgelet [13], curvelet [13], and contourlet [14]. Recently, many authors have developed the fundamental theory of the shearlet transform over the affine system, which provides a complete characterization of analysis and
synthesis. Principal component analysis (PCA) is a popular method for feature extraction and dimension reduction and is employed for image fusion [2]. Usually, principal component analysis is a mathematical transform that converts a number of correlated variables into a set of uncorrelated variables; PCA is widely used in data classification [4]. We propose an image fusion algorithm combining the shearlet transform and PCA techniques and carry out a quality analysis of the proposed fusion algorithm on a set of benchmark multi-focus images. Fusion using PCA is achieved by a weighted sum of the source images, where the weights for each source image are obtained from the normalized eigenvector of the covariance matrices of each source image. Experimental results show that the proposed algorithm provides better results that help to facilitate more accurate analysis of multi-focus images. Our contributions are as follows:
• A multi-focus image fusion algorithm using PCA in the shearlet domain is proposed.
• In the quality analysis, the proposed fusion algorithm is compared with five popular fusion schemes and shown to be more efficient for multi-focus images.
The rest of the paper is organized as follows. In Section 2 the shearlet transform is discussed in brief; our proposed method is presented in Section 3; the experimental results are illustrated in Section 4; Section 5 furnishes the conclusion of this paper.

2. SHEARLET TRANSFORM. Conventionally, the shearlet transform was developed from an affine system with composite dilations [15]. The shift-invariant shearlet transform primarily consists of two steps: multi-scale decomposition and directional localization. To save space, we refer the reader to [15, 16] for more details. Let us briefly describe the continuous and discrete shearlet transforms at a fixed resolution level j in the following subsections.

2.1 Continuous Shearlet Systems. In dimension n = 2, the affine systems with composite dilations are the
collections of the form

$$\Psi_{PQ}(\psi) = \bigl\{\psi_{j,l,k}(x) = |\det P|^{j/2}\,\psi\bigl(Q^{l}P^{j}x - k\bigr) : j,l \in \mathbb{Z},\ k \in \mathbb{Z}^2\bigr\}, \qquad (1)$$

where $P, Q$ are $2\times 2$ invertible matrices. For $f \in L^2(\mathbb{R}^2)$ one conventionally requires

$$\sum_{j,l,k} \bigl|\langle f, \psi_{j,l,k}\rangle\bigr|^2 = \|f\|^2. \qquad (2)$$

The elements of this system are called composite wavelets if $\Psi_{PQ}(\psi)$ forms a Parseval (tight) frame for $L^2(\mathbb{R}^2)$. In this system, the dilation matrices $P^j$ are associated with scale transformations, while the matrices $Q^l$ are associated with area-preserving geometric transformations such as rotations and shear. For the continuous shearlet transform we usually have

$$\psi_{a,s,k}(x) = a^{-3/4}\,\psi\bigl(U_s^{-1}V_a^{-1}(x - k)\bigr), \qquad (3)$$

where

$$V_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad U_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},$$

and $\psi \in L^2(\mathbb{R}^2)$ satisfies the following conditions:
1. $\hat\psi(\xi) = \hat\psi(\xi_1,\xi_2) = \hat\psi_1(\xi_1)\,\hat\psi_2(\xi_2/\xi_1)$;
2. $\hat\psi_1 \in C^\infty(\mathbb{R})$, $\operatorname{supp}\hat\psi_1 \subset [-2,-1/2] \cup [1/2,2]$, where $\psi_1$ is a continuous wavelet;
3. $\hat\psi_2 \in C^\infty(\mathbb{R})$, $\operatorname{supp}\hat\psi_2 \subset [-1,1]$, $\hat\psi_2 > 0$, and $\|\psi_2\| = 1$.
For $\psi_{a,s,k}$ with $a \in \mathbb{R}^+$, $s \in \mathbb{R}$, $k \in \mathbb{R}^2$, and any $f \in L^2(\mathbb{R}^2)$, this collection of wavelets at different scales is called a shearlet [15]. Here the anisotropic dilation matrix $V_a$ is associated with the scale transformation and the shear matrix $U_s$ with the geometric transformation; generally $a = 4$ and $s = 1$. The parameters $a, s, k$ denote the scale, the shear direction, and the translation vector, respectively.

2.2 The Discrete Shearlet Transform. The discrete shearlet transform can be divided into two steps: i) multi-scale subdivision and ii) directional localization [15]. Figure 1 illustrates the decomposition process with shearlets, and Figure 4 illustrates the shearlet transform of an image ('Book'). In this process, at any scale $j$, let $f \in L(\mathbb{Z}_N^2)$. First the Laplacian pyramid method is used to decompose the image $f_a^{j-1}$ into a low-pass image $f_a^{j}$ and a high-pass image $f_h^{j}$, with $N_j = N_{j-1}/4$, where $f_a^{j-1} \in L(\mathbb{Z}_{N_{j-1}}^2)$, $f_a^{j} \in L(\mathbb{Z}_{N_j}^2)$ and $f_h^{j} \in L(\mathbb{Z}_{N_{j-1}}^2)$. After decomposition, $\hat f_b^{j}$ is estimated on a pseudo-polar grid with a subsequent one-dimensional band-pass filter applied to the signal components, generating a special matrix $D\hat f_b^{j}$. Then
a band-pass filter is applied to the matrix $D\hat f_b^{j}$ to reconstruct the Cartesian sampled values, and finally the inverse two-dimensional fast Fourier transform (FFT) is performed to reconstruct the image. Figure 2 illustrates the structure of the frequency tiling provided by the shearlet transform [15].

3. THE PROPOSED FUSION ALGORITHM. In this paper, we develop a novel multi-focus image fusion scheme that shares the merits of the ST and PCA techniques. For simplicity, we name it the ST-PCA fusion algorithm. The main steps of the proposed ST-PCA fusion algorithm are shown in Figure 3. The proposed fusion scheme consists of two processing parts: the low-frequency coefficients of the original images are manipulated using PCA and integrated with an averaging method, and the high-frequency coefficients are updated using an adaptive parameter derived from high-pass sub-bands of the same and different levels. The necessary steps are described as follows.

Figure 1: The order of decomposition and directional filtering.
Figure 2: The structure of the frequency tiling by the shearlet: (a) the tiling of the frequency plane $\mathbb{R}^2$ induced by the shearlet; (b) the size of the frequency support of a shearlet $\psi_{j,l,k}$.
Figure 3: Schematic diagram of the ST-PCA-based fusion algorithm.
Figure 4: Illustration of the shearlet transform on the benchmark image 'Book': (a) original 'Book' image, (b) shearlet coefficients, (c) shearlet, (d) reconstructed 'Book' image.

3.1 PCA transform fusion approach. Principal component analysis transforms a number of correlated variables into a set of uncorrelated variables [5]. Mostly, image fusion using PCA is achieved by weighted factors of the source images. The weights for each source image are obtained from the normalized eigenvector of the covariance matrices of each source image [2, 5]. The concept and method are explained in detail in [2, 5]; the basics of PCA fusion are as follows:
• First, the multi-focus images are transformed with the PCA technique, and the eigenvalues and corresponding eigenvectors of the correlation matrix between the multi-focus source images are computed; the individual bands are the principal components of the derived matrix [4].
• Second, the multi-focus images are matched by the estimated principal components, which are used as weighted averages for the individual bands.
• Finally, the first band of the first multi-focus image is multiplied by the first principal component and the first band of the second multi-focus image by the second principal component; for the integration of the low-pass sub-bands, the inverse shearlet transform is performed.

3.2 Evaluation of low-frequency sub-band coefficients. Conventionally, the shearlet low-pass sub-bands efficiently represent the approximation of the original image [15]. In this paper, a threshold-based selection scheme is used to produce the best fused low-pass coefficients, since local image features are highly related to the ST coefficients and their corresponding neighborhoods [15]. The threshold is determined from the ST coefficients and their neighborhood. Furthermore, the local disparity of shearlet coefficients is closely related to the clarity of the fused image; thus, to obtain a better fused result, the principal ST coefficients are selected and combined by principal component analysis (PCA). The overall procedure is as follows. Let $L_g(i,j)$ denote the low-pass sub-band located at $(i,j)$, $g = X, Y$. The fused low-pass coefficients $L_F(i,j)$ are obtained by

$$L_F(i,j) = \alpha L_X(i,j) + \beta L_Y(i,j), \qquad (4)$$

where $\alpha, \beta$ with $\alpha + \beta = 1$ are two parameters determined via PCA as follows:
1. Initially, we estimate the covariance matrix $C(i,j)$ from $L_g(i,j)$ and compute the eigenvalue matrix $D$ and eigenvector matrix $V$ of $C(i,j)$ [4].
2. Later, with $[V, D] = \mathrm{eigen}(C(i,j))$: if $D(i,i) \ge D(i+1,i+1)$, the $i$-th column of the eigenvector matrix $V$ is normalized,
otherwise normalized the (i +1)th col-umn of eigen value matrix V .For i =1,we can write as V (:,i )/ n i =1V (:,i )and V (:,i +1)/ ni =1V (:,i +1)respectively.3.Finally,first element of normalized Eigen vale matrix V represents by αand second value by β.that are α=V (:,i )/n i =1V (:,i )and β=V (:,i +1)/ n i =1V (:,i +1)respectively.3.3Evaluation of high frequency sub-band co-efficientsHigh-pass sub-bands of ST provides the details informa-tion of the image,such as edge,corner and etc.[16].To enhance the performance of image fusion method,a new de-cision mapping scheme has been proposed and integrated with shearlets high-pass sub-bands.Let H l,kg (i,j )be the high-pass coefficient at the location (i,j )in the l th sub-band at the k th level,g =X,Y .In short,the process of high-pass shearlet sub-bands by proposed fu-sion method ST-PCA is as follows:For the current sub-band H l,kg of the horizontal levels,letR g,h is the total of H l,kg and the other horizontal sub-bands H m,k g in same level k .It is computed by:R g,h=K k =1L l =1| H l −1,k g −H l,kg |(5)where |·|is absolute distance between two consecutive levels.Similarly,for vertical level,let R g,v is the sum of H l,kgand all the other vertical high-pass sub-bands H m,ng in the different high-pass levels.We evaluated as follows:R g,v =K l,m =1L k,n =1|H l,kg−H m,n g|(6)To determine the horizontal ζg,h and the vertical ζg,v high-pass sub-band dependency factor for the present high-passshearlet sub-band H l,kg ,we performed:ζg,h =R g,hG,h g,v (7)ζg,v =R g,v(R g,h +R g,v )(8)The parameter ζg,h is a relationship between H l,kg and other neighbor high-pass sub-bands in the same horizontal plane,and ζg,v relationship between H l,kg in the different vertical plane with its corresponding neighbor sub-bands.Table 1:Statistical result analysis of ST-PCA Performance metric‘Book ’‘Clock’‘Pepsi ’MI 8.2478.0358.011QAB/F 0.7030.7310.771Finally,to derived the new coefficients H l,kg,new from oldshearlet high-pass sub-bands H 
^{l,k}_g by the factors ζ_{g,h} and ζ_{g,v}, we evaluate:

H^{l,k}_{g,new} = H^{l,k}_g × (1 + ζ^2_{g,h} + ζ^2_{g,v})    (9)

The fused coefficients H^{l,k}_F(i,j) are then obtained by the following selection:

H^{l,k}_F(i,j) = H^{l,k}_{X,new}(i,j) if H^{l,k}_{X,new}(i,j) ≥ H^{l,k}_{Y,new}(i,j); H^{l,k}_{Y,new}(i,j) otherwise    (10)

The ST-PCA fusion algorithm is summarized as follows:
• The images to be fused must be registered.
• The source images are decomposed by the shearlet transform.
• The best low-pass sub-band is estimated by PCA using Eq. (4).
• The largest high-pass shearlet coefficients are selected by Eqs. (9) and (10) and fused by the proposed fusion rules.
• The largest directional shearlet sub-band is selected.
• The fused image is reconstructed by the inverse shearlet transform (IST).

4. EXPERIMENTAL ANALYSIS

In this section, we assess the performance of the proposed multi-focus image fusion technique (ST-PCA). The proposed ST-PCA and the other selected fusion algorithms are analyzed on a set of benchmark images; three of them are presented to demonstrate the performance of the algorithm, and two of them are used for comparative analysis. The benchmark image sets are publicly available and are shown in Figure 5. In our experiments, we compared the proposed ST-PCA method with five different fusion schemes: (1) principal component analysis (PCA) [2, 4], (2) the Laplacian pyramid technique (LPT) [5], (3) the discrete wavelet transform (DWT) [6], (4) the curvelet transform (CVT) [13], and (5) the non-subsampled contourlet transform (NSCT) [14]. Our experimental platform is MATLAB R2010b on a PC with an Intel Quad-Core i3 3.2 GHz CPU and 8 GB of memory.

Here we selected two of the most popular fusion metrics, Mutual Information (MI) [17] and Q^{AB/F} [18], to evaluate the performance of multi-focus fusion. MI measures how much information is obtained from the input images; this is accomplished by accumulating the mutual information of the fused image with each of the input images [17]. The performance metric Q^{AB/F} [18], on the other hand, is based on the
assumption that the perceptually important information of an image lies in its high-frequency edge details; it computes how much edge information is transferred from the input images to the fused image.

Table 2: Statistical results of comparative assessments of various fusion schemes
Fusion method   Metric     'Clock'   'Pepsi'
PCA             MI         7.127     7.301
                Q^{AB/F}   0.708     0.719
LPT             MI         7.418     7.325
                Q^{AB/F}   0.712     0.722
DWT             MI         7.507     7.480
                Q^{AB/F}   0.719     0.732
CVT             MI         7.684     7.541
                Q^{AB/F}   0.720     0.740
NSCT            MI         7.980     7.897
                Q^{AB/F}   0.724     0.759
ST-PCA          MI         8.035     8.011
                Q^{AB/F}   0.731     0.771

The experimental results demonstrate that the in-focus regions of the source images are properly fused in the final images. Figure 5 shows three fusion results, and Figure 6 shows the fused images obtained by the different fusion algorithms on the 'Pepsi' and 'Clock' image sets. The MI and Q^{AB/F} results are listed in Table 1 and Table 2; note that the highest score in each row of Table 2 is the one achieved by ST-PCA. From Tables 1 and 2, it can be observed that the ST-PCA method consistently outperforms the other five methods on both evaluation metrics for the image sets of Figure 5. In addition to the objective evaluation, we also performed a visual comparison on the 'Pepsi' and 'Clock' image sets. Figure 6 illustrates the resultant fused images obtained from PCA [2, 4], LPT [5], DWT [6], CVT [13], NSCT [14], and ST-PCA, respectively.
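The MI score used above accumulates the mutual information of the fused image with each source image [17]. A minimal sketch of this computation, assuming grayscale inputs and a joint-histogram estimate of the distributions (the helper names `mutual_information` and `fusion_mi` are illustrative, not from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """MI between two equal-size grayscale images via a joint histogram."""
    # Joint histogram -> joint probability distribution p(x, y).
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi(src_x, src_y, fused):
    """MI fusion score as in [17]: MI(X, F) + MI(Y, F)."""
    return mutual_information(src_x, fused) + mutual_information(src_y, fused)
```

A higher score means the fused image retains more of the sources' intensity statistics; note the histogram-based estimate depends on the bin count, so comparisons should use a fixed `bins`.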
From Tables 1 and 2, we observe that ST-PCA is superior to the other five methods in terms of all the performance metrics. Furthermore, the quality scores of the proposed algorithm under MI and Q^{AB/F} are considerably higher than those of the other fusion methods, indicating that the method more effectively preserves the salient information of the image. From the above experiments, we conclude that the proposed ST-PCA consistently outperforms the other five methods in terms of image quality measures.

5. CONCLUSION

The shearlet provides better analytical information about directionality, localization, anisotropy, and multi-scale behavior than traditional multi-scale analysis. In this paper, we propose a new multi-focus image fusion algorithm based on the shift-invariant shearlet and PCA, the ST-PCA model. The main advantage of the proposed ST-PCA method is that it determines optimal focus-depth maps of the image, which improves focus-detection accuracy in smooth regions. Supported by the experimental results, we conclude that our ST-PCA method yields better results than many other existing state-of-the-art techniques.

6. REFERENCES
[1] Anish, A. and Jebaseeli, T. J. A survey on multi-focus image fusion methods. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET), 1(8), 2012.
[2] Konstantinos, G. Comparison of nine fusion techniques for very high resolution data. Photogramm. Eng. Remote Sens., 74(5), 2008, 647-659.
[3] Wang, Z., Ziou, D., Armenakis, C., Li, D. and Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens., 43(6), 2005, 1391-1402.
[4] Sun, J. F., Jiang, Y. J. and Zeng, S. Y. A study of PCA image fusion techniques on remote sensing. Proc. SPIE, 5985, 2005, 739-744.
[5] Burt, P. J. The pyramid as a structure for efficient computation. In: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, 1984, 6-35.
[6] Pajares, G. and de la Cruz, J. M. A wavelet-based image fusion tutorial. Pattern Recognit., 37(9), 2004, 1855-1872.
[7] Li, H., Manjunath, B. S. and Mitra, S. K. Multisensor image fusion using the wavelet transform. Graph. Models Image Process., 57(3), 1995, 235-245.
[8] Li, S. and Yang, B. Multifocus image fusion using region segmentation and spatial frequency. Image Vision Comput., 26(7), 2008, 971-979.
[9] Aslantas, V. and Kurban, R. Fusion of multi-focus images using differential evolution algorithm. Expert Syst. Appl., 37(12), 2010, 8861-8870.
[10] Tian, J. and Chen, L. Multi-focus image fusion using wavelet-domain statistics. In: 2010 17th IEEE International Conference on Image Processing (ICIP), 2010, 1205-1208.
[11] Zhao, H., Shang, Z., Tang, Y. Y. and Fang, B. Multi-focus image fusion based on the neighbor distance. Pattern Recognit., 46(3), 2013, 1002-1011.
[12] Huang, W. and Jing, Z. Evaluation of focus measures in multi-focus image fusion. Pattern Recognit. Lett., 28(4), 2007, 493-500.
[13] Ali, F., El-Dokany, I., Saad, A. and El-Samie, F. A curvelet transform approach for the fusion of MR and CT images. Journal of Modern Optics, 57, 2010, 273-286.
[14] Do, M. and Vetterli, M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process., 14(12), 2005, 2091-2106.
[15] Lim, W. Q. The discrete shearlet transform: a new directional transform and compactly supported shearlet frames. IEEE Trans. Image Process., 19(5), 2010, 1166-1180.
[16] Easley, G., Labate, D. and Colonna, F. Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process., 18(2), 2009, 260-268.
[17] Qu, G., Zhang, D. and Yan, P. Information measure for performance of image fusion. Electron. Lett., 38(7), 2002, 313-315.
[18] Xydeas, C. and Petrovic, V. Objective image fusion performance measure. Electron. Lett., 36(4), 2000, 308-309.

Figure 5: Multi-focus image fusion results of the ST-PCA-based method: (a, d, g) right focus; (b, e, h) left focus; (c, f, i) ST-PCA.

Figure 6: Fused images of the 'Pepsi' and 'Clock' sets obtained by (a, g) PCA, (b, h) LPT, (c, i) DWT, (d, j) CVT, (e, k) NSCT, and (f, l) ST-PCA.