A New Multi-focus Image Fusion Method Using Principal Component Analysis in Shearlet Domain
Undergraduate Graduation Thesis. Title: Research on Multi-focus Image Fusion Algorithms. Major / Student / Advisor / Completion date: [blank]. Task Statement.
1. Title: Research on Multi-focus Image Fusion Algorithms.
2. Guiding principles and objectives: This topic originates from a research project. It mainly studies the concept of multi-focus images, surveys commonly used multi-focus fusion algorithms, and then implements the relevant algorithms. Through this graduation project, the student is expected to: 1. apply existing professional knowledge and develop the ability to solve practical engineering problems; 2. exercise research skills and cultivate teamwork and problem-tackling abilities.
3. Main technical requirements: 1. study the characteristics of multi-focus images; 2. investigate multi-focus image fusion algorithms; 3. implement multi-focus image fusion.
4. Schedule and requirements: Weeks 01-02: translate reference English literature; Weeks 03-04: study the characteristics of multi-focus images; Weeks 05-08: investigate multi-focus image fusion algorithms; Weeks 09-14: implement the multi-focus image fusion program; Weeks 15-16: write the graduation thesis and defend it.
5. Main references: 1. Zhang Defeng. MATLAB Digital Image Processing [M]. Beijing: China Machine Press, 2012. 2. Jing Zhongliang. Image Fusion: Theory and Applications [M]. Beijing: Higher Education Press, 2010. 3. Guo Lei. Image Fusion [M]. Beijing: Publishing House of Electronics Industry, 2011. 4. Sun Wei. Research on Pixel-level Multi-focus Image Fusion Algorithms [D]. Changchun: Jilin University, 2008. 5. Ma Xianxi. Research on Multi-focus Image Fusion Algorithms [D]. Wuxi: Jiangnan University, 2012. Student / Advisor / Department head: [signatures].
Thesis Abstract: Image fusion combines two or more images of the same object into a single image according to certain rules.
The key is to extract the in-focus regions of each source image and merge them according to certain rules, producing a fused image that is sharp and information-complete. The specific goals of multi-focus image fusion include improving the spatial resolution of the image, improving geometric accuracy, enhancing feature visibility, improving classification accuracy, and replacing or repairing defects in image data. This thesis summarizes some basic concepts and background of multi-focus image fusion and presents a preliminary study of how the number of DWT decomposition levels and the number of directional sub-bands affect the fusion result.
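To make the DWT study above concrete, here is a minimal Python sketch using the PyWavelets package. The fusion rule shown (average the approximation band, keep the larger-magnitude detail coefficients) is a common baseline rather than the thesis's specific rule; the `levels` parameter exposes the decomposition depth whose influence the thesis investigates.

```python
import numpy as np
import pywt

def dwt_fusion(img_a, img_b, wavelet="db8", levels=3):
    """Baseline DWT fusion of two grayscale multi-focus images:
    average the approximation coefficients, keep the detail
    coefficient with the larger magnitude at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]              # low-pass band: average
    for da, db in zip(ca[1:], cb[1:]):           # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```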
Color-to-grayscale conversion is a fundamental topic and an important prerequisite in image processing and computer vision: the process of converting three-channel color information into one-dimensional grayscale data. To save cost, black-and-white printing is still widely used, and most figures in many publications are grayscale images. Many black-and-white images also carry artistic value, which has given rise to aesthetic applications of grayscale images such as Chinese ink-painting rendering and black-and-white photography [1]. To reduce the information content of the input image or the cost of subsequent computation, color images need to be converted to grayscale; this has many applications in image preprocessing, such as edge detection [2-3] and feature extraction [4-5].
Many methods have been proposed to make the grayscale image better preserve the features of the color original. According to whether the mapping function applies to all pixels of the whole image, common decolorization algorithms fall roughly into two classes: global mapping methods and local mapping methods. In local mapping methods, the gray value varies with spatial position: different gray values may be assigned to the same color to enhance the local contrast of the grayscale image, which makes the result sensitive to neighboring pixels. In 2004, Bala et al. [6] introduced high-frequency chrominance information into the luminance channel, locally preserving the differences between adjacent colors. Smith et al. [7] used a Laplacian pyramid to extract multi-level image features and adjusted the gray values of each pyramid level according to the ratio of color to grayscale contrast, enhancing weak edges and adjusting contrast. Lu Hongyang et al. [8] proposed an algorithm based on maximum weighted projection, building a maximized weighted locality-preserving projection model and formulating a maximum-weighted-projection objective function.
Local mapping methods attempt to capture the local differences of colors in three dimensions and to preserve local image features precisely by controlling pixel luminance, but they cannot guarantee global color consistency, turning a homogeneous image into an inhomogeneous one in the final grayscale result.

Multi-scale Fusion Decolorization Algorithm for Color Images
Gu Meihua, Wang Miaomiao, Li Liyao, Feng Jing
School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710600, China
Abstract: To preserve more of the original features after a color image is converted to grayscale, a new decolorization algorithm based on multi-scale image fusion is proposed. The color image is split into its R, G, and B channel images; a multi-scale image fusion model based on the Gaussian-Laplacian pyramid performs the decolorization, and gradient domain guided image filtering (GGIF) is introduced to suppress the artifacts that multi-scale fusion may produce.
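A minimal sketch of the pyramid stage may clarify the pipeline. The function names are hypothetical, the per-level maximum-magnitude rule is a plausible stand-in for the paper's fusion rule, and the GGIF artifact-removal step is omitted; image sides are assumed divisible by 2**levels.

```python
import cv2
import numpy as np

def laplacian_pyramid(channel, levels=4):
    """Gaussian pyramid via pyrDown; Laplacian levels as differences."""
    gp = [channel.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[-1]]                                # coarsest level kept as-is
    for i in range(levels, 0, -1):
        lp.insert(0, gp[i - 1] - cv2.pyrUp(gp[i]))
    return lp

def fuse_rgb_to_gray(bgr, levels=4):
    """Fuse the three channel pyramids level by level, keeping the
    coefficient with the largest magnitude, then collapse the pyramid."""
    pyrs = [laplacian_pyramid(bgr[:, :, c], levels) for c in range(3)]
    fused = []
    for a, b, c in zip(*pyrs):
        ab = np.where(np.abs(a) >= np.abs(b), a, b)
        fused.append(np.where(np.abs(ab) >= np.abs(c), ab, c))
    gray = fused[-1]
    for lap in reversed(fused[:-1]):
        gray = cv2.pyrUp(gray) + lap
    return np.clip(gray, 0, 255).astype(np.uint8)
```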
A New Multi-focus Image Fusion Algorithm Combining SVM and FNN: To deal with the problems of cracks among blocks and the uncertainty of real characteristics in block-based image fusion, this paper proposes a new multi-focus image fusion method combining a support vector machine (SVM) with a fuzzy neural network (FNN). Firstly, FCM and SVM are used to obtain the parameters of the FNN, and each block is classified into clear, blurred, and transitional zones by the FNN. Then the three classified areas are merged by weighting to get the fused multi-focus image, where the weight factors are obtained as the defuzzification outputs of the fuzzy neural network. Finally, the quality of the various fusion algorithms is evaluated by the root mean square error (RMSE), the mean absolute error (MAE), and the peak signal-to-noise ratio (PSNR). The experimental results show that the proposed fusion algorithm has good robustness and computing performance, basically meeting the demands of practical image fusion, and the quality evaluations illustrate that the method has an advantage over existing fusion algorithms.
1 Introduction. Image information fusion is an important branch of information fusion: by organically combining image information acquired by multiple sensors of the same scene, it generates an image whose information is more comprehensive, accurate, and complete, compensating for the limitations of any single sensor [1].
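Since the abstract evaluates fusion quality with RMSE, MAE, and PSNR, a short sketch of these standard definitions may be helpful (assuming a ground-truth reference image and 8-bit data; the paper's own evaluation protocol is not reproduced here):

```python
import numpy as np

def rmse(ref, img):
    return float(np.sqrt(np.mean((ref.astype(np.float64) - img) ** 2)))

def mae(ref, img):
    return float(np.mean(np.abs(ref.astype(np.float64) - img)))

def psnr(ref, img, peak=255.0):
    # PSNR = 20*log10(peak/RMSE); infinite when the images are identical
    e = rmse(ref, img)
    return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)
```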
Multi-focal-plane Image Fusion for Nonwoven Fabrics Based on the GHM Multiwavelet Transform
Chen Yang; Xin Binjie; Deng Na
Abstract: To address the problem that fibers in parts of a fabric image captured by an optical microscope at a single focal plane appear blurred, a multi-focal-plane image fusion algorithm for nonwoven fabrics based on the GHM multiwavelet transform is proposed. A self-built microscopic imaging system for nonwoven fabrics captures fabric image sequences at different focal planes. The initial image sequence is preprocessed by critically sampled prefiltering; two fusion rules are applied, one to the high-frequency and one to the low-frequency components; an initial fused fabric image is obtained by multiwavelet fusion and the inverse transform; the initial fused image is then fused with the subsequent single-focal-plane images in the same way, iterating until all fiber regions are displayed clearly. Experimental results show that the method digitally fuses image sequences captured at different focal planes, achieving sharp focus of the fiber web over the full field of view of a single image, which facilitates subsequent computer image processing and measurement.
Journal: Journal of Textile Research. Year (volume), issue: 2019, 40(06). Pages: 8 (125-132). Keywords: critically sampled prefiltering; GHM multiwavelet; multi-focal-plane fusion; nonwoven fabric image; microscopic imaging. Authors: Chen Yang; Xin Binjie; Deng Na. Affiliations: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China; School of Fashion Engineering, Shanghai University of Engineering Science, Shanghai 201620, China. Language: Chinese. CLC number: TP311.1.
Nonwoven fabric consists of randomly or directionally arranged fibers, and its production is dominated by the spunbond process.
The performance of a nonwoven fabric is closely related to the pore structure of the fiber web, and fiber thickness, width, orientation, and the way the web is formed are all related to this structure; obtaining these structural parameters and linking them to performance therefore has important guiding significance for both production and application. At present, the pore structure of nonwoven fabrics is mainly analyzed by indirect methods, which are time-consuming and labor-intensive and cannot account for the complexity of the pore structure. The development of computer digital image processing provides an effective tool for studying the structure and properties of nonwoven fabrics. Image quality is crucial for fiber morphology measurement and structural analysis. Because nonwoven fabrics are thick, the depth of field of an ordinary optical microscope is insufficient to render all fibers sharply in a single image. Measurements based on such incompletely focused images yield inaccurate fiber structure and may even mislead subsequent processing [1].
Multi-focus Image Fusion Based on the Contourlet Transform
Author: Ding Lan. Source: Computer Knowledge and Technology, 2008, No. 34.
Abstract: Because the focus range of a visible-light imaging system is limited, it is hard to obtain an image in which all objects in a scene are sharp; multi-focus image fusion solves this problem effectively. The Contourlet transform is multi-scale and multi-directional; introducing it into image fusion extracts the features of the original images better and provides more information for the fused image. This paper proposes a Contourlet-based multi-focus image fusion method with a fusion rule based on region statistics. The differently focused images are first Contourlet-transformed; the low-frequency coefficients are fused by averaging, the high-frequency coefficients by a rule determined by region statistics, and the inverse transform then yields the fused result. Experimental results are given, analyzed, and compared, showing that this method achieves better fusion than wavelet-based methods.
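The fusion rule just described can be sketched independently of the transform. The snippet below uses windowed energy as the region statistic, one plausible choice among several, and operates on coefficient arrays that a Contourlet (or any multi-scale) decomposition would supply; the decomposition itself is not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_subbands(low_a, low_b, highs_a, highs_b, win=3):
    """Average the low-pass sub-bands; for each high-pass sub-band keep,
    pixel-wise, the coefficient whose local region energy is larger."""
    low_f = (low_a + low_b) / 2.0
    highs_f = []
    for ha, hb in zip(highs_a, highs_b):
        ea = uniform_filter(ha * ha, size=win)   # region statistic for A
        eb = uniform_filter(hb * hb, size=win)   # region statistic for B
        highs_f.append(np.where(ea >= eb, ha, hb))
    return low_f, highs_f
```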
Keywords: image fusion; Contourlet transform; wavelet transform. CLC number: TP391. Document code: A. Article ID: 1009-3044(2008)34-1700-03.
Multifocus Image Fusion Based on Contourlet Transform
DING Lan (College of Information Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)
Abstract: Due to the limited depth of focus of optical lenses, it is often difficult to get an image that contains all relevant objects in focus. Multifocus image fusion can solve this problem effectively. The Contourlet transform has varying directions and multiple scales. When the Contourlet transform is introduced to image fusion, the characteristics of the original images are captured better and more information is obtained for fusion. A new multifocus image fusion method is proposed in this paper, based on the Contourlet transform with a fusion rule of region statistics. The differently focused images are first decomposed using the Contourlet transform; the low-frequency bands are then integrated using the weighted average, and the high-frequency bands are integrated using the region statistics rule. The fused image is obtained by the inverse Contourlet transform. Experimental results are shown and compared with the wavelet-based method; the experiments show that this approach achieves better results than the wavelet-based method.
Key words: image fusion; contourlet transform; wavelet transform
1 Introduction. For a visible-light imaging system, the focus range is limited, so all targets in a scene can rarely be imaged sharply at the same time. This can be solved by multi-focus image fusion: the same lens images two (or more) targets in the scene in two (or more) exposures, and the sharp parts of these exposures are fused into a new image, for human observation or subsequent computer processing.
J. Wang, X. Liao, and Z. Yi (Eds.): ISNN 2005, LNCS 3497, pp. 753-758, 2005. (c) Springer-Verlag Berlin Heidelberg 2005

Multifocus Image Fusion Using Spatial Features and Support Vector Machine
Shutao Li (1,2) and Yaonan Wang (1)
1 College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China. 2 National Laboratory on Machine Perception, Peking University, Beijing 100871, China.

Abstract. This paper describes an application of support vector machines to the pixel-level multifocus image fusion problem, based on the use of spatial features of image blocks. The algorithm first decomposes the source images into blocks. Given two of these blocks (one from each source image), an SVM is trained to determine which one is clearer. Fusion then proceeds by selecting the clearer block in constructing the final image. Experimental results show that the proposed method outperforms the discrete wavelet transform based approach, particularly when there is movement in the objects or misregistration of the source images.

1 Introduction

Optical lenses often suffer from the problem of limited depth of field. Consequently, the image obtained will not be in focus everywhere. A possible way to alleviate this problem is image fusion [1], in which several pictures with different in-focus parts are combined to form a single image. This fused image will then hopefully contain all relevant objects in focus. In recent years, various methods based on multiscale transforms have been proposed, including the Laplacian pyramid [2], the gradient pyramid [1], the ratio-of-low-pass pyramid [3] and the morphological pyramid [4]. More recently, the discrete wavelet transform (DWT) [5], [6] has also been used. In general, DWT is superior to the previous pyramid-based methods [6]. While these methods often perform satisfactorily, their multiresolution decompositions, and consequently the fusion results, are not shift invariant because of an underlying downsampling process. When there is slight camera/object movement or misregistration of the source images, their performance thus quickly deteriorates. In this paper, we propose a pixel-level multifocus image fusion method based on the use of spatial features of image blocks and support vector machines (SVM). The implementation is computationally simple and is robust to the shift problem. Experimental results show that it outperforms the DWT-based method. The rest of this paper is organized as follows. The proposed fusion scheme is described in Section 2. Experiments are presented in Section 3, and the last section gives some concluding remarks.

2 SVM-Based Multifocus Image Fusion

2.1 Feature Extraction

We extract two measures from each image block to represent its clarity, described in detail as follows.

2.1.1 Spatial Frequency (SF). Spatial frequency is used to measure the overall activity level of an image [7]. For an M x N image F, with gray value F(m,n) at pixel position (m,n), the spatial frequency is defined as

SF = \sqrt{RF^2 + CF^2},   (1)

where RF and CF are the row frequency and column frequency

RF = \sqrt{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=2}^{N}\bigl(F(m,n)-F(m,n-1)\bigr)^2}, \qquad
CF = \sqrt{\frac{1}{MN}\sum_{n=1}^{N}\sum_{m=2}^{M}\bigl(F(m,n)-F(m-1,n)\bigr)^2},

respectively.
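A direct transcription of Eq. (1) may be useful as a reference (a sketch; `block` is assumed to be a 2-D grayscale NumPy array):

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of Eq. (1): sqrt(RF^2 + CF^2)."""
    f = block.astype(np.float64)
    m, n = f.shape
    rf2 = np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (m * n)  # row frequency^2
    cf2 = np.sum((f[1:, :] - f[:-1, :]) ** 2) / (m * n)  # column frequency^2
    return np.sqrt(rf2 + cf2)
```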
2.1.2 Absolute Central Moment (ACM) [8]

ACM = \sum_{i=0}^{I-1} |i - \mu|\, p(i),   (2)

where \mu is the mean intensity value of the image, i is the gray level, and p(i) is its probability.

2.1.3 Demonstration of the Effectiveness of the Measures. We experimentally demonstrate the effectiveness of the two focus features. An image block of size 64x64 (Fig. 1(a)) is extracted from the "Lena" image. Fig. 1(b) to Fig. 1(e) show degraded versions blurred with a Gaussian filter of radius 0.5, 0.8, 1.0 and 1.5 respectively. As can be seen from Table 1, as the image becomes more blurred, the two features decrease monotonically. These results suggest that both features can be used to reflect image clarity.

Fig. 1. Original and blurred regions of an image block extracted from "Lena": (a) original region, (b) radius=0.5, (c) radius=0.8, (d) radius=1.0, (e) radius=1.5.

Table 1. Feature values for the image regions in Fig. 1.
       Fig. 1(a)   Fig. 1(b)   Fig. 1(c)   Fig. 1(d)   Fig. 1(e)
SF     40.88       20.61       16.65       14.54       11.70
ACM    51.86       48.17       47.10       46.35       44.96

2.2 The Fusion Algorithm. Fig. 2 shows a schematic diagram of the proposed multifocus image fusion method. Here we consider the processing of just two source images, though the algorithm extends straightforwardly to more than two.

Fig. 2. Schematic diagram of the proposed fusion method.

In detail, the algorithm consists of the following steps:
1. Decompose the two source images A and B into blocks of size M x N. Denote the i-th image block pair by A_i and B_i respectively.
2. From each image block, extract the two features described above that reflect its clarity. Denote the feature vectors for A_i and B_i by (SF_{A_i}, ACM_{A_i}) and (SF_{B_i}, ACM_{B_i}) respectively.
3. Train an SVM to determine whether A_i or B_i is clearer. The difference vector (SF_{A_i} - SF_{B_i}, ACM_{A_i} - ACM_{B_i}) is used as input, and the output is labeled according to

target_i = 1 if A_i is clearer than B_i, and -1 otherwise.   (3)

4. Perform testing of the trained SVM on all image block pairs obtained in Step 1. The i-th block Z_i of the fused image is then constructed as

Z_i = A_i if out_i > 0.5, and B_i otherwise,   (4)

where out_i is the SVM output for the i-th image block pair.
5. Verify the fusion result obtained in Step 4. Specifically, if the SVM decides that a particular block is to come from A but the majority of its surrounding blocks come from B, this block is switched to come from B. In the implementation, a majority filter with a 3x3 window is used.
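The ACM feature of Eq. (2) and Steps 4-5 of the algorithm can be sketched as follows. The SVM training of Step 3 is omitted; `choose_a` is assumed to be the boolean block-decision map obtained by thresholding the SVM outputs, and a 3x3 median filter on that binary map plays the role of the majority filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def absolute_central_moment(block, levels=256):
    """ACM of Eq. (2): sum_i |i - mu| * p(i) over the gray-level histogram."""
    f = block.astype(np.float64)
    hist, _ = np.histogram(f, bins=levels, range=(0, levels))
    p = hist / f.size
    return float(np.sum(np.abs(np.arange(levels) - f.mean()) * p))

def fuse_blocks(img_a, img_b, choose_a, bs=32):
    """Copy each bs x bs block from the source judged clearer; median
    filtering the binary decision map implements the 3x3 majority
    filter used for consistency verification (Step 5)."""
    verified = median_filter(choose_a.astype(np.uint8), size=3).astype(bool)
    fused = np.empty_like(img_a)
    for bi in range(verified.shape[0]):
        for bj in range(verified.shape[1]):
            src = img_a if verified[bi, bj] else img_b
            fused[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs] = \
                src[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs]
    return fused
```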
3 Experiments

Experiments are performed on the 256-level source images shown in Fig. 3(a) and (b); their size is 512x512. The true gray values of a reference image are not available, so only a subjective visual comparison is intended here. Image blocks of size 32x32 are used. Two pairs of regions in Fig. 3(a),(b), each containing 18 image block pairs, are selected as the training set; in 9 of these block pairs the first image is clearer than the second, and the reverse holds for the remaining 9 pairs. The two spatial features are extracted and normalized to the range [0,1] before being fed into the SVM; a linear kernel is used. For comparison purposes we also perform fusion using the DWT, with the wavelet basis "db8" and a decomposition level of 5. Similar to [6], we employ a region-based activity measurement for the activity level of the decomposed wavelet coefficients, a maximum selection rule for coefficient combination, and a window-based consistency verification scheme.

Fig. 3. The "Pepsi" source images and fusion results: (a) focus on the Pepsi can; (b) focus on the testing card; (c) fused image using DWT (db8, level=5); (d) fused image using SVM. The training set is selected from the regions marked by rectangles in Fig. 3(a) and Fig. 3(b).

Fig. 4. Differences between the fused images in Fig. 3(c),(d) and the source images in Fig. 3(a),(b): (a) DWT result (Fig. 3(c)) minus source Fig. 3(a); (b) DWT result (Fig. 3(c)) minus source Fig. 3(b); (c) SVM result (Fig. 3(d)) minus source Fig. 3(a); (d) SVM result (Fig. 3(d)) minus source Fig. 3(b).

Fusion results using DWT and SVM are shown in Fig. 3(c),(d). Recall that the focus in Fig. 3(a) is on the Pepsi can while that in Fig. 3(b) is on the testing card. It can be seen from Fig. 3(d) that the fused image produced by the SVM is essentially a combination of the well-focused can and the well-focused board. In comparison, the DWT result shown in Fig. 3(c) is much inferior. Clearer comparisons of their performance can be made by examining the differences between the fused images and each source image (Fig. 4).

4 Conclusions

In this paper, we proposed a method for pixel-level multifocus image fusion using spatial features of image blocks and an SVM. Features indicating the clarity of an image block, namely spatial frequency and absolute central moment, are extracted and fed into the support vector machine, which learns to determine which source image is clearer at that particular physical location. Experimental results show that this method outperforms the DWT-based approach, particularly when there is object movement or registration problems in the source images.

Acknowledgements. This work is supported by the National Natural Science Foundation of China (No. 60402024).

References
1. Burt, P.J., Kolczynski, R.J.: Enhanced Image Capture through Fusion. In: Proc. of the 4th Inter. Conf. on Computer Vision, Berlin (1993) 173-182
2. Burt, P.J., Adelson, E.H.: The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Comm., 31 (1983) 532-540
3. Toet, A., Ruyven, L.J., Valeton, J.M.: Merging Thermal and Visual Images by a Contrast Pyramid. Optic. Eng., 28 (1989) 789-792
4. Matsopoulos, G.K., Marshall, S., Brunt, J.N.H.: Multiresolution Morphological Fusion of MR and CT Images of the Human Brain. Proc. of IEE: Vision, Image and Signal, 141 (1994) 137-142
5. Li, H., Manjunath, B.S., Mitra, S.K.: Multisensor Image Fusion using the Wavelet Transform. Graph. Models Image Proc., 57 (1995) 235-245
6. Zhang, Z., Blum, R.S.: A Categorization of Multiscale-Decomposition-Based Image Fusion Schemes with a Performance Study for a Digital Camera Application. Proc. of the IEEE, 87 (1999) 1315-1325
7. Eskicioglu, A.M., Fisher, P.S.: Image Quality Measures and Their Performance. IEEE Trans. Comm., 43 (1995) 2959-2965
8. Shirvaikar, M.V.: An Optimal Measure for Camera Focus and Exposure. 36th IEEE Southeastern Symp. on Sys. Theory, Atlanta (2004) 472-475
Image Fusion Algorithm Based on Steerable Filters and Spatial Frequency
Guo Feng; Yang Jing; Shi Jianfang
Abstract: Through a study of pixel-level image fusion, a multi-focus image fusion algorithm based on steerable filters and spatial frequency is proposed. Steerable filters are used to filter the source images to obtain oriented analytic images in different directions. The local spatial frequency of each oriented analytic image is calculated and compared to obtain the oriented analytic spatial frequency map of the source images. The fused image is composed of pixels selected from the source images according to the fusion rule. Experimental results show that, compared with the results of other fusion algorithms, the proposed algorithm has certain advantages in subjective evaluation and objective evaluation indicators.
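The per-pixel selection step of such spatial-frequency methods can be sketched as below. This is a simplification: the steerable-filter stage that produces the oriented analytic images is omitted, and the local spatial frequency is computed directly on the source images.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_frequency(img, win=7):
    """Windowed row/column difference energy around each pixel."""
    f = img.astype(np.float64)
    dr = np.zeros_like(f); dr[:, 1:] = (f[:, 1:] - f[:, :-1]) ** 2
    dc = np.zeros_like(f); dc[1:, :] = (f[1:, :] - f[:-1, :]) ** 2
    return np.sqrt(uniform_filter(dr + dc, size=win))

def pixel_select_fusion(img_a, img_b, win=7):
    """Keep, per pixel, the source with the larger local spatial frequency."""
    sa = local_spatial_frequency(img_a, win)
    sb = local_spatial_frequency(img_b, win)
    return np.where(sa >= sb, img_a, img_b)
```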
Block-Based Multi-focus Image Fusion with Nonseparable Wavelets
Liu Weijie (1), Liu Bin (2), Peng Jiaxiong (3)
Abstract: To address the larger root mean square error and the smaller entropy and spatial frequency of existing image fusion methods, a block-based multi-focus image fusion method using nonseparable wavelets is proposed. The method decomposes the source images at multiple scales with a nonseparable wavelet filter bank, selects the blocks with more salient features (larger variance) as the constituent blocks of the fused sub-images, and applies the inverse nonseparable wavelet transform to the fused block image to form the fused image. Experimental results show that, compared with other fusion methods, the method eliminates block artifacts, saves computation, and achieves better fusion results.
Keywords: multi-focus image fusion; nonseparable wavelet; root mean square error
Multi-focus Image Fusion of Nonseparable Wavelet Based on Blocking
LIU Wei-jie (1), LIU Bin (2), PENG Jia-xiong (3)
(1. School of Computer, Wuhan University, Wuhan 430079, China; 2. School of Mathematics and Computer Science, Hubei University, Wuhan 430062, China; 3. Institute of Image Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, China)
Abstract: In order to solve the problems that existing image fusion methods have bigger Root Mean Square Error (RMSE) and smaller entropy and spatial frequency, this paper presents a block-based multi-focus image fusion method using nonseparable wavelets. The source images are decomposed using the nonseparable wavelet filter bank. The sub-images are then segmented into blocks, and these blocks are fused by selecting the block with the bigger variance. The inverse wavelet transform is carried out to produce the fused image. Experimental results show that the fusion performance of the method is better than that of the other fusion methods: it can eliminate the blocking artifacts of the fused images and save fusion time.
Key words: multi-focus image fusion; nonseparable wavelet; Root Mean Square Error (RMSE). DOI: 10.3969/j.issn.1000-3428.2011.02.071. Computer Engineering, Vol. 37, No. 2, January 2011. Article ID: 1000-3428(2011)02-0205-02. Document code: A. CLC number: TP391.
1 Overview. In recent years, image fusion has been widely applied in military and civilian fields such as remote sensing imagery, medical imaging, and machine vision [1].
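The variance-based block selection at the heart of this method can be sketched as follows (hypothetical names; the snippet fuses one pair of corresponding sub-band images, with the nonseparable wavelet decomposition assumed to be performed separately):

```python
import numpy as np

def select_blocks_by_variance(sub_a, sub_b, bs=8):
    """For each bs x bs block of a sub-band pair, keep the block with the
    larger variance, i.e. the one with the more salient features."""
    h, w = sub_a.shape
    out = sub_a.copy()
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            if sub_b[i:i+bs, j:j+bs].var() > sub_a[i:i+bs, j:j+bs].var():
                out[i:i+bs, j:j+bs] = sub_b[i:i+bs, j:j+bs]
    return out
```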
New Technologies and Methods in Medical Imaging
In recent years, medicine and technology have become increasingly integrated, and medical imaging is advancing rapidly. With the continual emergence and renewal of new technologies, the range of applications of medical imaging keeps widening, bringing great possibilities to medical diagnosis and treatment. This article examines new technologies and methods in medical imaging from several angles, in the hope that readers will benefit.
1. Computer-Aided Diagnosis (CAD)
Computer-aided diagnosis is a technology centered on the computer that assists physicians in diagnosis and differential diagnosis through digital image processing, feature extraction, and classification. With the continued development of computer technology, CAD has shown advantages in many areas such as pulmonary nodules, breast cancer, liver cancer, brain diseases, and cardiovascular diseases. For example, CAD-based three-dimensional breast imaging can acquire large numbers of high-resolution breast images and apply computer algorithms to search for signal anomalies, providing a set of processing tools for human interpretation and thereby improving the accuracy and reliability of early breast cancer diagnosis.
2. Medical Image Enhancement
Medical image enhancement refers to a family of techniques that apply image processing algorithms to raw images to improve their quality. These techniques improve image quality by removing noise, enhancing contrast, and improving resolution, and they play an important role in medical diagnosis and surgical planning. At present, medical image enhancement is widely applied in magnetic resonance imaging (MRI), computed tomography (CT), radionuclide imaging, and other medical fields. For example, for lung CT images, enhancement can yield clearer images of lung tissue, helping physicians diagnose lung diseases more precisely.
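As an illustration only (this pipeline is not taken from the article), a typical enhancement chain of the kind described, light denoising followed by local contrast boosting, can be written with OpenCV:

```python
import cv2

def enhance_ct_slice(img_u8):
    """Denoise an 8-bit grayscale slice with a small Gaussian blur,
    then boost local contrast with CLAHE."""
    denoised = cv2.GaussianBlur(img_u8, (3, 3), 0)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)
```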
3. Multi-Modal Medical Image Fusion
Multi-modal medical image fusion combines medical images acquired with different imaging modalities to obtain more comprehensive and accurate image information. By jointly exploiting different imaging techniques, it gathers more image information: fusing images of different modalities yields more complete, accurate, and reliable information, improving the accuracy and reliability of medical diagnosis.
Image Fusion Combining the Shearlet Transform and Regional Characteristics
Zheng Wei; Sun Xueqing; Li Zhe
Abstract: To improve the fusion performance for multi-modality medical images and multi-focus images, and exploiting the shearlet transform's ability to capture image detail, an image fusion algorithm based on the shearlet transform is proposed. First, the shearlet transform decomposes the two precisely registered original images into low-frequency sub-band coefficients and high-frequency sub-band coefficients at different scales and in different directions. The low-frequency sub-band coefficients are fused with an improved weighted rule whose weights are computed from the average gradient, mitigating blurred contours in the fused image; the high-frequency sub-band coefficients are fused with a rule combining regional variance and regional energy to obtain rich detail information. Finally, the inverse shearlet transform yields the fused image. The results show that this algorithm outperforms other fusion algorithms in both subjective visual effect and objective evaluation metrics.
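The average-gradient weighting for the low-frequency sub-bands can be sketched as follows. A single global weight per sub-band is assumed; the paper's exact (possibly local) weighting scheme is not reproduced.

```python
import numpy as np

def average_gradient(img):
    """Mean gradient magnitude from horizontal/vertical differences."""
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def fuse_lowpass_by_avg_gradient(low_a, low_b):
    """Weighted low-pass fusion with weights proportional to the
    average gradient of each sub-band."""
    ga, gb = average_gradient(low_a), average_gradient(low_b)
    wa = ga / (ga + gb + 1e-12)
    return wa * low_a + (1.0 - wa) * low_b
```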
In order to improve the performance of multi-modality medical image fusion and multi-focus image fusion, and since the shearlet transform can capture the detail information of images, an image fusion algorithm based on the shearlet transform was proposed. Firstly, the shearlet transform was used to decompose the two registered original images, obtaining the low-frequency sub-band coefficients and the high-frequency sub-band coefficients of different scales and directions. The fusion principle for the low-frequency sub-band coefficients was based on weighted fusion, using the average gradient to calculate the weighting parameters in order to reduce contour blur in the fused image. For the high-frequency sub-band coefficients, a fusion rule adopting the region variance combined with the region energy was presented to capture the detail information. Finally, the fused image was reconstructed by the inverse shearlet transform. The results show that the algorithm is superior to other fusion algorithms in subjective visual effect and objective evaluation.
Journal: Laser Technology. Year (volume), issue: 2015 (1). Pages: 7 (50-56). Keywords: image processing; image fusion; shearlet transform; weighted fusion; regional variance; regional energy. Authors: Zheng Wei; Sun Xueqing; Li Zhe. Affiliations: College of Electronic and Information Engineering, Hebei University, Baoding 071002, China; Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding 071002, China. Language: Chinese. CLC number: TP391.
Image fusion is the process of combining two or more images into a new image. It does not simply add the images together; rather, it uses specific rules to synthesize the useful information of each image into a single image that is more accurate and more comprehensive.
I.J. Image, Graphics and Signal Processing, 2013, 12, 56-61. Published Online October 2013 in MECS. DOI: 10.5815/ijigsp.2013.12.08

Correcting Multi-focus Images via Simple Standard Deviation for Image Fusion
Firas A. Jassim, Management Information Systems Department, Irbid National University, Irbid 2600, Jordan

Abstract: Image fusion is one of the recent trends in image registration, an essential field of image processing. The basic principle of this paper is to fuse multi-focus images using the simple statistical standard deviation. Firstly, the standard deviation of each k x k window inside each of the multi-focus images is computed. The contribution of this paper comes from the idea that the focused part of an image has higher detail than the unfocused part; hence, the dispersion between pixels inside the focused part is higher than inside the unfocused part. Secondly, the standard deviations of corresponding k x k windows across the multi-focus images are compared. The window with the highest standard deviation is treated as the optimal one to be placed in the fused image. The experimental visual results show that the proposed method produces very satisfactory results in spite of its simplicity.

Index Terms: Image fusion, Multi-focus, Multi-sensor, Standard deviation.

I. INTRODUCTION

Image fusion is the process of integrating two or more images to construct one fused image which is highly informative. With the development of technology and the invention of various image-capturing devices, it is not always possible to get a detailed image that contains all the required information. During the past few years, image fusion has gained importance for many applications, especially remote sensing, computer vision, medical imaging, military applications, and microscopic imaging [27]. Multi-sensor data in remote sensing, medical imaging, and machine vision may comprise multiple images of the same scene providing different information. In machine vision, due to the limited depth of focus of optical lenses in charge-coupled devices (CCD), a single image cannot contain all the information of the objects in the scene. A complete picture may not always be feasible, since optical lenses of imaging sensors, especially those with long focal lengths, have only a limited depth of field [9]. The basic objective of image fusion is to extract the required information from images captured by various sources and sensors, and then to fuse these images into a single enhanced composite image [14]. Images captured by usual camera sensors have uneven characteristics and supply different information; the essential usefulness of image fusion is its ability to reveal features that cannot be realized with traditional sensors, which enhances human visual perception. The fused image contains greater data content for the scene than any one of the individual sources alone [7]. The fusion process must preserve all relevant information in the fused image. This paper discusses the fusion of multi-focus images using the simple statistical standard deviation in a k x k window. Section II reviews recent work and the variety of approaches.
The proposed technique for fusing images is presented in Section III. The experimental results and performance assessments are discussed in Section IV. Finally, the main conclusions of this paper are discussed in Section V.

II. IMAGE FUSION PRELIMINARIES

The essential seed of image fusion goes back to the fifties and sixties of the last century, when practical methods were first sought for compositing images from different sensors into a composite image that could better coincide with natural images. The image fusion field includes many terms such as merging, combination, and integration [26]. Nowadays there are two approaches to image fusion, namely spatial fusion and transform fusion. Currently, pixel-based methods are the most used technique, where a composite image is synthesized from several input images [1][21]. A multi-focus image fusion algorithm based on the ratio of blurred and original image intensities was proposed by [19]. A generic categorization is to consider a process at the signal, pixel, or feature and symbolic levels [15]. An application of artificial neural networks to the multi-focus image fusion problem was discussed by [14]. The implementation of graph cuts in image fusion was proposed by [16]. According to [23], compressive-sensing image fusion has been discussed. A wavelet-based image fusion tutorial can be found in [18], and complex wavelets and their application to image fusion in [11]. Image fusion using principal component analysis (PCA) in the compressive sampling domain can be found in [28]. A new Intensity-Hue-Saturation (IHS) technique for image fusion with a tradeoff parameter was discussed by [3]. An implementation of the ripplet transform for medical images was discussed by [6]. An excellent comparative analysis of image fusion methods can be found in [26], and a comparison of different image fusion techniques for individual tree crown identification using QuickBird images in [20]. Here, to compare the proposed technique with common image fusion techniques, we use the wavelet transform and principal component analysis (PCA), the two most widely implemented methods in image fusion [28]. The wavelet transform has become a very useful tool for image fusion: wavelet-based fusion techniques outperform the standard fusion techniques in spatial and spectral quality, especially in minimizing color distortion. Schemes that combine the standard methods (IHS or PCA) with wavelet transforms produce superior results to either standard methods or simple wavelet-based methods alone; the tradeoff is higher complexity and cost [18]. Principal component analysis, on the other hand, is a statistical method for dimension reduction: it projects data from its original space to its eigenspace, retaining the components corresponding to the largest eigenvalues and discarding the others, which reduces redundant information and highlights the components with the biggest influence [26]. Furthermore, many modifications of the wavelet method have been researched and discussed; one of these contributions is the wavelet-packet fusion method introduced in [2].
Another contribution, the Dual Tree Complex Wavelet Transform (DT-CWT), can be found in [10][12]. A curvelet image fusion was presented by [4][17]. Many other fusion methods exist, such as the contourlet [8][22] and the Non-subsampled Contourlet Transform (NSCT) [5].

III. PROPOSED TECHNIQUE

The proposed technique is based on computing the simple standard deviation for each of the fused and input images. The standard deviation is a measure of how spread out numbers are: the dispersion of a set of data from its mean. The more spread apart the data, the higher the deviation; it has proven an extremely useful measure of spread, in part because it is mathematically tractable. The essential contribution used here as a novel fusion technique is the fact that the standard deviation of the focused part of an image is higher than that of the corresponding part of the unfocused image: the dispersion between pixels inside the focused part is higher than inside the unfocused part. Hence, the standard deviation is computed for each k x k window in all the input multi-focus images, and the highest value among the computed standard deviations is treated as the optimal one. According to Fig. 1, the standard deviation of an arbitrary 2x2 window was computed in four input images to show that the highest value identifies the image with more detail. The computed standard deviations were 0.5, 1.7078, 3.304, and 2.63 for figures (1.a), (1.b), (1.c), and (1.d) respectively. The highest value (3.304) belongs to the third image (Fig. 1.c), which has sharp edges, while the other images have blurred or semi-blurred edges. Consequently, the k x k window with the highest standard deviation is recommended as the optimal window to be placed in the fused image. The window size k should be as small as possible; in this article the recommended k that gives the best results is 2.

Figure 1. Four multi-focus images (a), (b), (c) and (d), each with its own standard deviation. [2x2 pixel-value blocks shown in the paper.]
Figure 2. Blurred Block from Clock (Blur). [Image with the 10x10 window marked.]
Figure 3. Original Block from Clock (Sharp). [10x10 grid of pixel values printed in the paper.]
Figure 4. Blurred Block from Clock (Blur). [10x10 grid of pixel values printed in the paper.]
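A minimal sketch of the rule just described, assuming equal-size grayscale inputs whose sides are divisible by k:

```python
import numpy as np

def std_window_fusion(images, k=2):
    """For each k x k window position, copy the window from whichever
    input image has the largest standard deviation there."""
    stack = [img.astype(np.float64) for img in images]
    h, w = stack[0].shape
    fused = np.zeros((h, w))
    for i in range(0, h, k):
        for j in range(0, w, k):
            wins = [img[i:i+k, j:j+k] for img in stack]
            best = int(np.argmax([win.std() for win in wins]))
            fused[i:i+k, j:j+k] = wins[best]
    return fused.astype(images[0].dtype)
```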
As another example, to illustrate changing the window size, a 10x10 window from the clock multi-focus image (Fig. 2) was used. The computed standard deviations for the 10x10 window were 3.2489 and 1.8803 for the sharp and blurred images (of the small clock) respectively; the window is indicated by wide dark borders. Once again the sharp-edged image has a higher standard deviation than the blurred one. The 10x10 windows of the sharp and blurred images are presented in Figures 3 and 4 respectively.

IV. EXPERIMENTAL RESULTS

In this section, experimental results that support the proposed technique are presented and discussed. Several standard test images were used as input images, and the fused images were obtained by applying the highest-standard-deviation rule. The visual results show that the proposed technique produces quite satisfactory results compared with other methods such as the wavelet transform and principal component analysis (PCA). There are many performance measures used for image fusion techniques, such as Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), the Image Quality Index (IQI) [24], and the structural similarity index (SSIM) [25][29]. Here the SSIM measure was used because of its high reliability [13].

TABLE 1. COMPARISON OF SSIM BETWEEN WAVELET, PCA, AND THE PROPOSED METHOD. [Values printed in the paper.]

According to Table 1, the SSIM values for the proposed technique are less than those for both wavelet and PCA; the structural similarity index of both wavelet and PCA is higher than that of the proposed fusion technique. It must be mentioned that SSIM measures the similarity between the fused and input images according to the original structure. Since the blurring used here covers approximately half of each input image, SSIM measures similarity against both the blurred and the sharp parts of the inputs. Therefore, the smaller SSIM value of the proposed technique does not mean that it is not good: it means only that the degree of similarity of its fused image to the inputs is lower than for wavelet and PCA. This is supported by the visual results in Figures 5 to 8; the results obtained by the proposed technique show sharper detail than the other methods (wavelet and PCA). As another performance measure, PSNR was used to test the quality of the constructed image. PSNR cannot be computed unless the original image is available, a restriction SSIM does not have: the original image may not always exist, because the multi-focus images are obtained with different zoom settings and at different times for the same scene, and there is no perfect image to compare against. Hence, in this paper, PSNR could not be evaluated for all the test images.
Therefore, PSNR was computed for the three test images whose originals are available: Lena, Peppers (V), and Peppers (D), where V stands for vertical and D for diagonal blurring direction.

TABLE 2. COMPARISON OF PSNR BETWEEN WAVELET, PCA, AND THE PROPOSED METHOD. [Values printed in the paper.]

Clearly, according to Table 2, the PSNR values for the proposed technique are higher (by a small absolute difference) than the PSNR values for both wavelet and PCA. This means that the differences between the image constructed by the proposed technique and the original image are smaller than those of wavelet and PCA, which supports the earlier claim that the proposed technique produces quite satisfactory results relative to its common competitors, wavelet and PCA.

Figures 5-8. For each test pair: (a) blurred image (left); (b) blurred image (right); (c) wavelet result; (d) PCA result; (e) proposed result.

V. CONCLUSION

The main and fundamental conclusion is the simplicity and performance of the proposed technique compared to other, more complicated methods that produce almost the same results using highly complicated mathematical formulas and time-consuming transformations. The only disadvantage of the proposed technique is that the fused image is not always perfect; a selection criterion for the best standard deviation among several standard deviations is needed, which may be addressed in future work. Further, the number of input images used in this article is two, but this number can be generalized to more than two, which is the reason for the name multi-focus image fusion technique. The results obtained by the highest-standard-deviation technique demonstrate its adequacy and efficiency in constructing a fused image from multi-focus images taken with different zoom settings and at different times by the camera sensor. Finally, visual effects and statistical measures indicate that the performance of the new method is better than its competitors in the field. A recommended direction for future work is the application of the proposed technique to color images rather than the grayscale images used in this article, which may pave the way for applying it to video streams.

REFERENCES
[1] Ben Hamza A., He Y., Krim H., and Willsky A., "A multiscale approach to pixel-level image fusion", Integrated Computer-Aided Engineering, Vol. 12, pp. 135-146, 2005.
[2] Cao W., Li B.C., Zhang Y.A., "A remote sensing image fusion method based on PCA transform and wavelet packet transform", International Conference on Neural Networks and Signal Processing, vol. 2, pp. 976-981, 2003.
[3] Choi M., "A New Intensity-Hue-Saturation Fusion Approach to Image Fusion With a Tradeoff Parameter", IEEE Transactions on Geoscience and Remote Sensing, Vol. 44, No. 6, June 2006.
[4] Choi M., Kim R.Y., Nam M.R., "Fusion of multispectral and panchromatic satellite images using the curvelet transform", IEEE Geosci. Remote Sens. Lett., vol. 2, pp. 136-140, 2005.
[5] Cunha A.L., Zhou J.P., Do M.N., "The nonsubsampled contourlet transform: Theory, design and applications", IEEE Trans. Image Process., vol. 15, pp. 3089-3101, 2006.
[6] Das S., Chowdhury M., and Kundu M.K., "Medical Image Fusion Based on Ripplet Transform Type-I", Progress In Electromagnetics Research B, Vol. 30, pp. 355-370, 2011.
[7] Delleji T., Zribi M., and Ben Hamida A., "Multi-Source Multi-Sensor Image Fusion Based on Bootstrap Approach and SEM Algorithm", The Open Remote Sensing Journal, Vol. 2, pp. 1-11, 2009.
[8] Do M.N., Vetterli M., "The contourlet transform: An efficient directional multiresolution image representation", IEEE Trans. Image Process., vol. 14, pp. 2091-2106, 2005.
[9] Flusser J., Sroubek F., and Zitova B., "Image Fusion: Principles, Methods, and Applications", Lecture notes, Tutorial EUSIPCO 2007.
[10] Hill P.R., Bull D.R., Canagarajah C.N., "Image fusion using a new framework for complex wavelet transforms", Int. Conf. Image Process., vol. 2, pp. 1338-1341, 2005.
[11] Hill P., Canagarajah N., and Bull D., "Image fusion using complex wavelets", Proceedings of the 13th British Machine Vision Conference, University of Cardiff, pp. 487-497, 2002.
[12] Ioannidou S., Karathanassi V., "Investigation of the dual-tree complex and shift-invariant discrete wavelet transforms on Quickbird image fusion", IEEE Geosci. Remote Sens. Lett., vol. 4, pp. 166-170, 2007.
[13] Klonus S., Ehlers M., "Performance of evaluation methods in image fusion", 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009, pp. 1409-1416.
[14] Li S., Kwok J.T., and Wang Y., "Multifocus image fusion using artificial neural networks", Pattern Recognition Letters, Vol. 23, pp. 985-997, 2002.
[15] Miles B., Ben Ayed I., Law M.W.K., Garvin G., Fenster A., and Li S., "Spine Image Fusion via Graph Cuts", IEEE Transactions on Biomedical Engineering, Vol. PP, No. 99, 2013.
[16] Nencini F., Garzelli A., Baronti S., "Remote sensing image fusion using the curvelet transform", Inf. Fusion, vol. 8, pp. 143-156, 2007.
[17] Pajares G., De la Cruz J.M., "A wavelet-based image fusion tutorial", Pattern Recognition, Vol. 37, pp. 1855-1872, 2004.
[18] Qiguang M., Baoshu W., Ziaur R., Robert A.S., "Multi-Focus Image Fusion Using Ratio of Blurred and Original Image Intensities", Proceedings of SPIE, Visual Information Processing XIV, 29-30 March 2005, Orlando, Florida, USA.
[19] Luo R., Kay M., "Data fusion and sensor integration: state of the art in 1990s", in Data Fusion in Robotics and Machine Intelligence, M. Abidi and R. Gonzalez, eds., Academic Press, San Diego, 1992.
[20] Riyahi R., Kleinn C., Fuchs H., "Comparison of Different Image Fusion Techniques for Individual Tree Crown Identification Using Quickbird Images", Proceedings of the ISPRS Hannover Workshop 2009, 2009.
[21] Rockinger O., Fechner T., "Pixel-Level Image Fusion: The Case of Image Sequences", Proceedings of SPIE, Vol. 3374, No. 1, pp. 378-388, 1998.
[22] Song H., Yu S., Song L., "Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient", Opt. Eng., vol. 46, pp. 1-3, 2007.
[23] Wan T., Canagarajah N., and Achim A., "Compressive Image Fusion", Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 1308-1311, October 2008, San Diego, California, USA.
[24] Wang Z. and Bovik A.C., "A universal image quality index", IEEE Signal Processing Letters, Vol. 9, No. 3, pp. 81-84, 2002.
[25] Wang Z., Bovik A.C., "Image Quality Assessment: From Error Visibility to Structural Similarity", IEEE Transactions on Image Processing, Vol. 13, No. 4, pp. 600-612, 2004.
[26] Wang Z., Ziou D., Armenakis C., Li D., and Li Q., "A Comparative Analysis of Image Fusion Methods", IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 6, pp. 1391-1402, 2005.
[27] Yuan J., Shi J., Tai X.-C., Boykov Y., "A Study on Convex Optimization Approaches to Image Fusion", Lecture Notes in Computer Science, Vol. 6667, pp. 122-133, 2012.
[28] Zebhi S., Aghabozorgi Sahaf M.R., and Sadeghi M.T., "Image Fusion Using PCA in CS Domain", Signal and Image Processing: An International Journal, Vol. 3, No. 4, 2012.
[29] Zhang L., Zhang L., Mou X., and Zhang D., "FSIM: A Feature Similarity Index for Image Quality Assessment", IEEE Transactions on Image Processing, Vol. 20, No. 8, pp. 2378-2386, 2011.

Firas A. Jassim was born in Baghdad, Iraq, in 1974. He received the B.S. and M.S. degrees in Applied Mathematics and Computer Applications from Al-Nahrain University, Baghdad, Iraq, in 1997 and 1999, respectively, and the Ph.D. degree in Computer Information Systems (CIS) from the Arab University for Banking and Financial Sciences, Amman, Jordan, in 2012. In 2012 he joined the Faculty of Business Administration, Department of Management Information Systems, Irbid National University, Irbid, Jordan, where he is currently an assistant professor. His current research interests are image compression, image interpolation, image segmentation, image enhancement, image fusion, and simulation.
A Visible-Light Imaging System That Sees Through Frosted Glass
Li Chengyong; Ying Chunxia; Hu Jingjing
Abstract: The ability to acquire fine image information of a target through complex media is a major difficulty in optoelectronic image acquisition and processing. Using a CMOS photoelectric image sensor, a CMOS imaging system with back-end readout and processing circuits was designed to image a target through frosted glass and transmit the captured image information to a computer for processing. The system is built according to the principle of a camera's optical imaging system, uses a general-purpose CMOS image sensor chip for the circuit design, and adds infrared laser auxiliary illumination for image capture. The same target is imaged at different distances from far to near, and an iterative image enhancement algorithm is applied to the captured images, which addresses the inhomogeneity of the frosted glass and greatly improves light-source reconstruction accuracy. The resulting visible-light images have clear contours; compared with a typical CCD imaging system, the recognition rate exceeds 95%, far above that of ordinary imaging systems, and the imaging performance is good.
Journal: Journal of Applied Optics. Year (volume), issue: 2019, 40(3). Pages: 6 (416-421). Keywords: visible light; CMOS image sensor; photoelectric imaging; image processing. Authors: Li Chengyong; Ying Chunxia; Hu Jingjing. Affiliation: School of Electronic Information, Chongqing Institute of Engineering, Chongqing 400056, China. Language: Chinese. CLC number: TN29.
Introduction. Frosted glass, also called matte or anti-glare glass, is a semi-transparent glass whose surface has been roughened by grinding with emery or by chemical treatment. Because the surface is not a smooth plane, light passing through it is scattered in all directions; the diffusely refracted light that reaches the retina no longer forms a complete image, so the scene behind the glass cannot be seen.
Frosted glass affects imaging quite differently across optical wavebands; common imaging bands include visible, ultraviolet, and infrared light. Ultraviolet light has a short wavelength, is strongly absorbed, and penetrates only a short depth; it is commonly used for transmission imaging of materials such as skin. Because frosted glass contains small amounts of impurities such as water, ultraviolet light is absorbed more strongly than visible light when passing through it; visible light, with its high transmittance, is therefore chosen for imaging through frosted glass, for which the glass surface is effectively smooth, absorption is weak, and the refractive index is low. Imaging through frosted glass captures video images with a photoelectric image sensor, acquiring image information invisible to human vision and allowing more latent information to be captured.
Multi-focus Image Fusion Based on a Multi-scale Feature Fusion Network
Authors: Lü Jingjing, Zhang Rongfu. Source: Optical Instruments, 2021, No. 5.
Abstract: Multi-focus image fusion breaks through the depth-of-field limitation of traditional cameras by combining several images with different focal points into one all-in-focus image, obtaining more comprehensive information. Previous spatial-domain and transform-domain methods required manual design of activity-level measurements and fusion rules, which is complicated. Compared with a traditional neural network, the proposed method adds a stage that extracts shallow feature information, improving classification accuracy. The source images are fed into the trained multi-scale feature network to obtain an initial focus map; the focus map is then post-processed, and finally a pixel-wise weighted average rule produces the all-in-focus fused image. Experimental results show that the all-in-focus images fused by this method have high sharpness, rich detail, and low distortion; both subjective and objective evaluations are better than those of the other methods compared.
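The final pixel-wise weighted-average step can be sketched as follows (hypothetical names; Gaussian smoothing of the focus map stands in for the paper's unspecified post-processing):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def weighted_fusion(img_a, img_b, focus_map, sigma=2.0):
    """Pixel-wise weighted average driven by a focus map in [0, 1],
    where 1 means source A is in focus at that pixel."""
    w = np.clip(gaussian_filter(focus_map.astype(np.float64), sigma), 0.0, 1.0)
    return w * img_a + (1.0 - w) * img_b
```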
Keywords: multi-focus image fusion; convolutional neural network; multi-scale features. CLC number: TP 183. Document code: A.
Multi-focus image fusion technology based on multi-scale feature fusion network
LU Jingjing, ZHANG Rongfu (School of Optoelectronic Information and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China)
Abstract: The multi-focus image fusion technology is to break through the limitation of the traditional camera's depth of field and combine multiple images with different focal points into a full-focus image to obtain more comprehensive information. In the past, methods based on the spatial domain and the transform domain required manual activity level measurement and fusion rule design, which was complicated. Compared with the traditional neural network, the method proposed in this paper adds a part that extracts shallow feature information and improves the classification accuracy. The source image is input into the trained multi-scale feature network to obtain the initial focus map; the focus map is then post-processed; finally, the pixel-by-pixel weighted average rule is used to obtain the all-focus fusion image. The experimental results show that the all-focus image fused by the method in this paper has high definition, rich details, and low distortion; the subjective and objective evaluation results are better than those of the other methods used for comparison.
Keywords: multi-focus image fusion; convolutional neural network; multi-scale features
Introduction. Image fusion combines the important information of multiple images into a single image with richer detail than any single source image [1].
A New Multi-focus Image Fusion Method Using Principal Component Analysis in Shearlet Domain
Biswajit Biswas, Ritamshirsa Choudhuri, Kashi Nath Dey (Department of Computer Science and Engineering, University of Calcutta, Kolkata, India); Amlan Chakrabarti (AKCSIT, University of Calcutta, Kolkata, India)

ABSTRACT
Multi-focus image fusion is used to produce a single image in which the entire view is focused, by combining multiple images taken with different focus distances. Here we present a concept for multi-focus image fusion using the principal component analysis (PCA) method in the shearlet domain. Our proposed concept works in two folds: i) transform the source images into shearlet images by using the shearlet transform (ST); ii) use the PCA model in the low-pass sub-band, by which the best pixels in smooth parts are selected according to their arrangement. The composition of the different high-pass sub-band coefficients obtained by the ST decomposition is then realized, and the resultant fused image is reconstructed by performing the inverse shearlet transform (IST). The experimental results show that our proposed technique can render better fusion results than some existing methods in the state of the art. This comparative assessment is done in the light of qualitative and quantitative measurements in terms of mutual information and the fusion metric QAB/F.

Categories and Subject Descriptors: I.4 [Computing Methodologies]: Image Processing - Image Fusion
Keywords: Multi-focus image fusion, shearlet transform, principal component analysis, mutual information
PerMin'15, February 26-27, 2015, Kolkata, West Bengal, India. Copyright 2015 ACM 978-1-4503-2002-3/15/02. DOI: 10.1145/2708463.2709064

1. INTRODUCTION
Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing [1]. This technology has been widely used in different areas such as the military, remote sensing, and medicine [2]. Nowadays, many image fusion techniques have been proposed to fuse multi-focus images [8, 9, 11, 12]. In general, these fusion techniques can be categorized into two classes: i) spatial domain methods and ii) transform domain methods [1, 2]. In spatial domain methods, the intensity values of pixels from the input images are fused directly. The simplest fusion method of this class computes the average of all the input images, i.e. the 'average scheme', but the average scheme often suffers from sharpness and contrast degradation [3].
An improvement over this baseline method is to use the sharper pixels at each pixel location over all the input images to compose the fused image. This requires an evaluation metric, such as variance [2], energy of image gradient [3], or energy of Laplacian [3, 5], to establish how much of a pixel should be fused. However, in these pixel-based methods (e.g. principal component analysis (PCA), Intensity-Hue-Saturation (IHS), etc.), the presence of noise can cause inaccurate measurement of sharpness and degrade fusion performance, because sharpness is evaluated locally around each pixel [3]. To resolve the noise issue, block-based methods have been proposed [8]: the input images are first divided into blocks/regions, and the fused image is composed by selecting the sharper blocks/regions from the input images [12]. Block-based methods may suffer from artifacts at block boundaries, which greatly reduce the quality of the fused image. These limitations of spatial domain methods are handled by transform domain methods. The wavelet transform, the Laplacian pyramid, and the curvelet transform are pioneering methods of the transform domain and produce better results than any spatial domain method. In recent years the wavelet transform, with its time-frequency localization, multi-scale characteristic, and sparse representation of target functions with point singularities, has been widely used in image processing to good effect [6, 7, 10]. However, for images containing higher-dimensional singularities, the wavelet transform cannot achieve the optimal sparse approximation [13, 14]. To resolve this limitation of wavelet representations, multi-scale geometric analysis theory has been developed, yielding a series of new multi-scale geometric transforms such as the ridgelet [13], curvelet [13], and contourlet [14]. More recently, several authors have developed the fundamental theory of the shearlet transform over affine systems, which provides a complete characterization of analysis and synthesis. Principal component analysis (PCA) is a popular method for feature extraction and dimension reduction and is employed for image fusion [2]. Usually, principal component analysis is a mathematical tool that transforms a number of correlated variables into several uncorrelated variables; PCA is widely used in data classification [4]. We propose an image fusion algorithm combining the shearlet transform and PCA techniques and carry out a quality analysis of the proposed fusion algorithm on a set of benchmark multi-focus images. Fusion using PCA is achieved by a weighted sum of the source images; the weights for each source image are obtained from the normalized eigenvector of the covariance matrices of each source image. Experimental results show that the proposed algorithm provides better results that help to facilitate more accurate analysis of multi-focus images. Our contributions are as follows:
- A multi-focus image fusion algorithm using PCA in the shearlet domain is proposed.
- In the quality analysis, the proposed fusion algorithm is compared with five popular fusion schemes and shown to be more efficient for multi-focus images.
The rest of the paper is organized as follows. In Section 2, the shearlet transform is discussed in brief; our proposed method is presented in Section 3; the experimental results are illustrated in Section 4; Section 5 furnishes the conclusion of this paper.

2. SHEARLET TRANSFORM
Conventionally, the shearlet transform was developed based on an affine system with composite dilations [15].
The shift-invariant shearlet transform primarily consists of two steps: multi-scale decomposition and directional localization. To save space, we refer the reader to [15, 16] for more details. The continuous and discrete shearlet transforms at a fixed resolution level j are briefly described in the following subsections.

2.1 Continuous Shearlet Systems
In dimension n = 2, the affine systems with composite dilations are the collections of the form

\Psi_{PQ}(\psi) = \{\psi_{j,l,k}(x) = |\det P|^{j/2}\,\psi(Q^{l} P^{j} x - k) : j, l \in \mathbb{Z},\ k \in \mathbb{Z}^2\},   (1)

where P, Q are 2x2 invertible matrices and f \in L^2(\mathbb{R}^2). The elements of this system are called composite wavelets if \Psi_{PQ}(\psi) forms a Parseval (tight) frame for f \in L^2(\mathbb{R}^2), i.e.

\sum_{j,l,k} |\langle f, \psi_{j,l,k} \rangle|^2 = \|f\|^2.   (2)

In this system, the dilation matrices P^j are associated with scale transformations, while the matrices Q^l are associated with area-preserving geometric transformations such as rotations and shear. The continuous shearlet transform is usually defined by

\psi_{a,s,k}(x) = a^{-3/4}\,\psi\bigl(U_s^{-1} V_a^{-1}(x - k)\bigr),   (3)

where V_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, U_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}, and \psi \in L^2(\mathbb{R}^2) satisfies the following conditions:
1. \hat{\psi}(\xi) = \hat{\psi}(\xi_1, \xi_2) = \hat{\psi}_1(\xi_1)\,\hat{\psi}_2(\xi_2/\xi_1);
2. \hat{\psi}_1 \in C^{\infty}(\mathbb{R}), supp \hat{\psi}_1 \subset [-2, -1/2] \cup [1/2, 2], where \psi_1 is a continuous wavelet;
3. \hat{\psi}_2 \in C^{\infty}(\mathbb{R}), supp \hat{\psi}_2 \subset [-1, 1], \hat{\psi}_2 > 0, and \|\psi_2\| = 1.
For \psi_{a,s,k} with a \in \mathbb{R}^+, s \in \mathbb{R}, k \in \mathbb{R}^2, any such collection for f \in L^2(\mathbb{R}^2) is called a shearlet: a collection of wavelets at different scales [15]. The anisotropic dilation matrix V_a is associated with the scale transformation and the shear matrix U_s denotes the geometric transformation; generally a = 4 and s = 1, and a, s, k denote the scale transformation, the shear direction, and the translation vector, respectively.

2.2 The Discrete Shearlet Transform
The discrete shearlet transform proceeds in two steps: i) multi-scale subdivision and ii) direction localization [15]. Figure 1 illustrates the decomposition process with the shearlets, and Figure 4 illustrates the shearlet transform of an image ('Book'). At any scale j, let f \in L(\mathbb{Z}_N^2). First the Laplacian pyramid method is used to decompose the image f_a^{j-1} into a low-pass image f_a^j and a high-pass image f_h^j with N_j = N_{j-1}/4, where f_a^{j-1} \in L(\mathbb{Z}_{N_{j-1}}^2), f_a^j \in L(\mathbb{Z}_{N_j}^2) and f_h^j \in L(\mathbb{Z}_{N_{j-1}}^2). After decomposition, \hat{f}_h^j is estimated on a pseudo-polar grid, and a one-dimensional band-pass filter is applied to the signal components, generating a matrix D\hat{f}_h^j. A band-pass filter is then applied to the matrix D\hat{f}_h^j to reconstruct the Cartesian sampled values, and finally the inverse two-dimensional Fast Fourier Transform (FFT) is performed to reconstruct the image. Figure 2 illustrates the structure of the frequency tiling provided by the shearlet transform [15].

3. THE PROPOSED FUSION ALGORITHM
In this paper, we develop a novel multi-focus image fusion scheme that shares the merits of the ST and PCA techniques; for simplicity, we name this fusion algorithm the ST-PCA fusion algorithm. The main steps of the proposed ST-PCA fusion algorithm are shown in Figure 3. The proposed fusion scheme consists of two processing parts: the low-frequency coefficients of the original images are manipulated using PCA and integrated with an averaging method, and the high-frequency coefficients are updated using an adaptive parameter derived from high-pass sub-bands of the same and of different levels. The necessary processes are described as follows.

3.1 PCA Transform Fusion Approach
Principal component analysis transforms a number of correlated variables into several uncorrelated variables [5].
Figure 1: The order of decomposition and directional filtering.
Figure 2: The structure of the frequency tiling by the shearlet: (a) the tiling of the frequency plane R^2 induced by the shearlet; (b) the size of the frequency support of a shearlet \psi_{j,l,k}.
Figure 3: Schematic diagram of the ST-PCA-based fusion algorithm.
Figure 4: Illustration of the shearlet transform on the benchmark image 'Book': (a) original 'Book' image, (b) shearlet coefficients, (c) shearlet, (d) reconstructed 'Book' image.

Mostly, image fusion using PCA is achieved by weighted factors of the source images. The weights for each source image are obtained from the normalized eigenvector of the covariance matrices of each source image [2, 5]. The concept and method are explained in detail in [2, 5]; the basics of PCA fusion are as follows:
- First, the multi-focus images are transformed with the PCA technique, yielding the eigenvalues and corresponding eigenvectors of the correlation matrix between the multi-focus source images; the individual bands are the principal components of the derived matrix [4].
- Second, the multi-focus images are matched by the estimated principal components, used as weighted averages for the individual bands.
- Finally, the first band of the first multi-focus image is multiplied by the first principal component and the first band of the second multi-focus image by the second principal component; the low-pass sub-bands are then integrated via the inverse shearlet transform.

3.2 Evaluation of Low-Frequency Sub-band Coefficients
Conventionally, the shearlet low-pass sub-bands efficiently represent the approximation of the original image [15]. In this paper, a threshold-based selection scheme is used to produce the best fused low-pass coefficients, since local image features are highly correlated with the ST coefficients and their corresponding neighborhoods [15]; the threshold is determined from the ST coefficients and their neighborhood. Furthermore, the local disparity of the shearlet coefficients is closely related to the clarity of the fused image. Thus, to obtain a better fused result, the principal ST coefficients are selected and combined by the principal component analysis (PCA) method. The overall procedure is as follows. Let L_g(i,j) denote the low-pass sub-band coefficient located at (i,j), g = X, Y. The fused low-pass coefficients L_F(i,j) are obtained by

L_F(i,j) = \alpha\, L_X(i,j) + \beta\, L_Y(i,j),   (4)

where \alpha and \beta with \alpha + \beta = 1 are two parameters determined via the PCA method as follows:
1. Estimate the covariance matrix C(i,j) from L_g(i,j) and compute the eigenvalue matrix D and eigenvector matrix V of C(i,j): [V, D] = eigen(C(i,j)).
2. If D(i,i) >= D(i+1,i+1), normalize the i-th column of the eigenvector matrix V; otherwise normalize the (i+1)-th column. For i = 1 this gives V(:,i) / \sum_{i=1}^{n} V(:,i) and V(:,i+1) / \sum_{i=1}^{n} V(:,i+1), respectively.
3. The first element of the normalized eigenvector gives \alpha and the second gives \beta: \alpha = V(:,i) / \sum_{i=1}^{n} V(:,i) and \beta = V(:,i+1) / \sum_{i=1}^{n} V(:,i+1).
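The weight computation for Eq. (4) can be sketched as follows (a sketch of the eigenvector normalization listed above, treating the two low-pass sub-bands as two variables):

```python
import numpy as np

def pca_weights(low_x, low_y):
    """Return (alpha, beta) of Eq. (4) from the principal eigenvector of
    the 2x2 covariance matrix of the two low-pass sub-bands."""
    data = np.vstack([low_x.ravel(), low_y.ravel()])
    cov = np.cov(data)                    # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])               # principal eigenvector
    alpha, beta = v / v.sum()             # normalize so alpha + beta = 1
    return alpha, beta

# usage: L_F = alpha * L_X + beta * L_Y
```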
3.3 Evaluation of high frequency sub-band coefficients

The high-pass sub-bands of the ST provide the detail information of the image, such as edges and corners [16]. To enhance the performance of the image fusion method, a new decision mapping scheme is proposed and integrated with the shearlet high-pass sub-bands. Let H_{g}^{l,k}(i,j) be the high-pass coefficient at location (i,j) in the l-th sub-band at the k-th level, g = X, Y. In short, the processing of the high-pass shearlet sub-bands by the proposed ST-PCA fusion method is as follows.

For the current sub-band H_{g}^{l,k} at the horizontal levels, let R_{g,h} be the accumulated difference between H_{g}^{l,k} and the other horizontal sub-bands H_{g}^{m,k} in the same level k. It is computed by

R_{g,h} = \sum_{k=1}^{K} \sum_{l=1}^{L} \left| H_{g}^{l-1,k} - H_{g}^{l,k} \right|   (5)

where |\cdot| is the absolute distance between two consecutive sub-bands.

Similarly, for the vertical levels, let R_{g,v} be the sum over H_{g}^{l,k} and all the other vertical high-pass sub-bands H_{g}^{m,n} in the different high-pass levels, evaluated as

R_{g,v} = \sum_{l,m=1}^{K} \sum_{k,n=1}^{L} \left| H_{g}^{l,k} - H_{g}^{m,n} \right|   (6)

To determine the horizontal (\zeta_{g,h}) and vertical (\zeta_{g,v}) high-pass sub-band dependency factors for the current high-pass shearlet sub-band H_{g}^{l,k}, we compute

\zeta_{g,h} = \frac{R_{g,h}}{R_{g,h} + R_{g,v}}   (7)

\zeta_{g,v} = \frac{R_{g,v}}{R_{g,h} + R_{g,v}}   (8)

The parameter \zeta_{g,h} captures the relationship between H_{g}^{l,k} and its neighboring high-pass sub-bands in the same horizontal plane, and \zeta_{g,v} the relationship between H_{g}^{l,k} and its corresponding neighbor sub-bands in the different vertical planes.

Finally, the new coefficients H_{g,new}^{l,k} are derived from the old shearlet high-pass sub-bands H_{g}^{l,k} through the factors \zeta_{g,h} and \zeta_{g,v}:

H_{g,new}^{l,k} = H_{g}^{l,k} \times \left( 1 + \zeta_{g,h}^{2} + \zeta_{g,v}^{2} \right)   (9)

The fused coefficients H_{F}(i,j) are then obtained by

H_{F}(i,j) = \begin{cases} H_{X,new}^{l,k}(i,j) & \text{if } H_{X,new}^{l,k} \ge H_{Y,new}^{l,k} \\ H_{Y,new}^{l,k}(i,j) & \text{otherwise} \end{cases}   (10)

The procedure of the ST-PCA fusion algorithm is summarized as follows (a code sketch of the high-pass rule is given after the list):

- The images to be fused must be registered.
- The source images are decomposed by the shearlet transform.
- The best low-pass sub-bands are estimated by PCA using Eq. (4).
- The largest high-pass shearlet coefficients are selected by Eqs. (9) and (10) and fused by the proposed fusion rules.
- The largest directional shearlet sub-bands are selected.
- The fused image is reconstructed by the inverse shearlet transform (IST).
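The following MATLAB sketch illustrates Eqs. (5)-(10) for a single decomposition level. It is our simplified illustration, not the authors' code: HX and HY are assumed to be cell arrays holding the L directional high-pass sub-bands of the two sources at that level, Eq. (5) is approximated by differences between the current sub-band and every other stored sub-band, and because only one level is kept, the inter-level sum of Eq. (6) is replaced by the sub-band's own absolute energy as a stand-in.

```matlab
% Per-sub-band boost (Eq. (9)) followed by per-pixel maximum
% selection (Eq. (10)); HX, HY are cell arrays of L sub-bands.
L  = numel(HX);
HF = cell(1, L);
for l = 1:L
    Hnew = cell(1, 2);
    for g = 1:2
        if g == 1, H = HX; else, H = HY; end
        % Eq. (5), simplified: accumulated absolute difference to the
        % other directional sub-bands of the same level
        Rh = 0;
        for m = [1:l-1, l+1:L]
            Rh = Rh + sum(abs(H{l}(:) - H{m}(:)));
        end
        % Eq. (6), stand-in: own absolute energy (inter-level terms
        % are unavailable in this single-level sketch)
        Rv = sum(abs(H{l}(:)));
        zh = Rh / (Rh + Rv + eps);            % Eq. (7)
        zv = Rv / (Rh + Rv + eps);            % Eq. (8)
        Hnew{g} = H{l} * (1 + zh^2 + zv^2);   % Eq. (9)
    end
    mask  = Hnew{1} >= Hnew{2};               % Eq. (10): keep the
    HF{l} = Hnew{1}.*mask + Hnew{2}.*(~mask); % larger boosted source
end
```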
4. EXPERIMENTAL ANALYSIS

In this section, we assess the performance of the proposed multi-focus image fusion technique (ST-PCA). The proposed ST-PCA and other selected fusion algorithms are analyzed on a set of benchmark images; three of them are presented to demonstrate the performance of the algorithm, and two of them are used for the comparative analysis. The benchmark image sets are shown in Figure 5. In our experiments, we compared the performance of the proposed ST-PCA method with five different fusion schemes: (1) principal component analysis (PCA) [2,4], (2) the Laplacian pyramid technique (LPT) [5], (3) the discrete wavelet transform (DWT) [6], (4) the curvelet transform (CVT) [13], and (5) the nonsubsampled contourlet transform (NSCT) [14]. Our experimental platform is MATLAB R2010b on a PC with an Intel Quad-Core i3 3.2 GHz CPU and 8 GB memory.

We selected the two most popular fusion metrics, Mutual Information (MI) [17] and Q^{AB/F} [18], to evaluate the performance of multi-focus fusion. MI determines how much information is carried over from the input images; this is accomplished by accumulating the mutual information of the fused image with each of the input images [17]. The performance metric Q^{AB/F} [18], on the other hand, is based on the assumption that the perceptually important information of an image lies in the high-frequency edge details; it computes how much edge information is transferred from the input images to the fused image.
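For reference, the MI score can be computed from joint gray-level histograms. The sketch below (saved as fusion_mi.m) reflects the common definition MI = I(F;A) + I(F;B); the 256-bin histogram and the 8-bit grayscale inputs are our assumptions, and [17] may differ in such details.

```matlab
function m = fusion_mi(A, B, F)
% FUSION_MI  Mutual-information fusion score MI = I(F;A) + I(F;B)
% for 8-bit grayscale images A, B (sources) and F (fused result).
    m = mi2(F, A) + mi2(F, B);
end

function m = mi2(U, V)
    u = double(U(:)) + 1;  v = double(V(:)) + 1;  % gray levels -> 1..256
    J = accumarray([u, v], 1, [256, 256]);        % joint histogram
    P = J / sum(J(:));                            % joint probability
    pu = sum(P, 2);  pv = sum(P, 1);              % marginal probabilities
    nz = P > 0;                                   % avoid log2(0)
    R  = P ./ (pu * pv);                          % p(u,v) / (p(u) p(v))
    m  = sum(P(nz) .* log2(R(nz)));               % mutual information
end
```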
Table 1: Statistical result analysis of ST-PCA

Performance metric   'Book'   'Clock'   'Pepsi'
MI                    8.247    8.035     8.011
Q^{AB/F}              0.703    0.731     0.771

Table 2: Statistical results of comparative assessments of various fusion schemes

Fusion method   Performance metric   'Clock'   'Pepsi'
PCA             MI                    7.127     7.301
                Q^{AB/F}              0.708     0.719
LPT             MI                    7.418     7.325
                Q^{AB/F}              0.712     0.722
DWT             MI                    7.507     7.480
                Q^{AB/F}              0.719     0.732
CVT             MI                    7.684     7.541
                Q^{AB/F}              0.720     0.740
NSCT            MI                    7.980     7.897
                Q^{AB/F}              0.724     0.759
ST-PCA          MI                    8.035     8.011
                Q^{AB/F}              0.731     0.771

The experimental results demonstrate that the in-focus regions of the source images are properly fused in the final images. Figure 5 shows three fusion results of the proposed method, and Figure 6 shows the fused images obtained by the different fusion algorithms on the 'Pepsi' and 'Clock' image sets. The MI and Q^{AB/F} results are given in Table 1 and Table 2; note that the highest score in each row of Table 2 is achieved by ST-PCA. From Table 1 and Table 2, it can be observed that the ST-PCA method consistently outperforms the other five methods in both evaluation metrics on the image sets of Figure 5. In addition to the objective evaluation, we also performed a visual comparison on the 'Pepsi' and 'Clock' image sets; Figure 6 illustrates the resultant fused images obtained from PCA [2,5], LPT [5], DWT [6], CVT [13], NSCT [14] and ST-PCA, respectively. Furthermore, the MI and Q^{AB/F} quality scores of the proposed algorithm are much higher than those of the other fusion methods, reflecting that the method preserves the salient information of the image more efficiently. From the above experiments, we conclude that the proposed ST-PCA persistently outperforms the other five methods in the image quality measures.

5. CONCLUSION

The shearlet provides better analytical information about directionality, localization, anisotropy and multi-scale behavior than traditional multi-scale analyses. In this paper, we proposed a new multi-focus image fusion algorithm based on the shift-invariant shearlet transform and PCA, the ST-PCA model. The main advantage of the proposed ST-PCA method is that it determines optimal focus-depth maps of the image, which improves the focus detection accuracy in smooth regions. Supported by the experimental results, we conclude that the ST-PCA method offers better results than many other existing state-of-the-art techniques.

6. REFERENCES

[1] Anish, A. and Jebaseeli, T.J. A survey on multi-focus image fusion methods. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET), 1(8), 2012.
[2] Konstantinos. Comparison of nine fusion techniques for very high resolution data. Photogramm. Eng. Remote Sens., 74(5), 2008, 647-659.
[3] Wang, Z., Ziou, D., Armenakis, C., Li, D. and Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens., 43(6), 2005, 1391-1402.
[4] Sun, J.F., Jiang, Y.J. and Zeng, S.Y. A study of PCA image fusion techniques on remote sensing. Proc. SPIE, 5985, 2005, 739-744.
[5] Burt, P.J. The pyramid as a structure for efficient computation. In: A. Rosenfeld (Ed.), Multiresolution Image Processing and Analysis, Springer-Verlag, 1984, 6-35.
[6] Pajares, G. and de la Cruz, J.M. A wavelet-based image fusion tutorial. Pattern Recognit., 37(9), 2004, 1855-1872.
[7] Li, H., Manjunath, B.S. and Mitra, S.K. Multisensor image fusion using the wavelet transform. Graph. Models Image Process., 57(3), 1995, 235-245.
[8] Li, S. and Yang, B. Multifocus image fusion using region segmentation and spatial frequency. Image Vision Comput., 26(7), 2008, 971-979.
[9] Aslantas, V. and Kurban, R. Fusion of multi-focus images using differential evolution algorithm. Expert Syst. Appl., 37(12), 2010, 8861-8870.
[10] Tian, J. and Chen, L. Multi-focus image fusion using wavelet-domain statistics. In: 2010 17th IEEE International Conference on Image Processing (ICIP), 2010, 1205-1208.
[11] Zhao, H., Shang, Z., Tang, Y.Y. and Fang, B. Multi-focus image fusion based on the neighbor distance. Pattern Recognit., 46(3), 2013, 1002-1011.
[12] Huang, W. and Jing, Z. Evaluation of focus measures in multi-focus image fusion. Pattern Recognit. Lett., 28(4), 2007, 493-500.
[13] Ali, F., El-Dokany, I., Saad, A. and El-Samie, F. A curvelet transform approach for the fusion of MR and CT images. Journal of Modern Optics, 57, 2010, 273-286.
[14] Do, M. and Vetterli, M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process., 14(12), 2005, 2091-2106.
[15] Lim, W.Q. The discrete shearlet transform: a new directional transform and compactly supported shearlet frames. IEEE Trans. Image Process., 19(5), 2010, 1166-1180.
[16] Easley, G., Labate, D. and Colonna, F. Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process., 18(2), 2009, 260-268.
[17] Qu, G., Zhang, D. and Yan, P. Information measure for performance of image fusion. Electron. Lett., 38(7), 2002, 313-315.
[18] Xydeas, C. and Petrovic, V. Objective image fusion performance measure. Electron. Lett., 36(4), 2000, 308-309.

Figure 5: Multi-focus image fusion results of the ST-PCA-based method: (a, d, g) right focus; (b, e, h) left focus; (c, f, i) ST-PCA.

Figure 6: Fused 'Pepsi' and 'Clock' images obtained by (a, g) PCA; (b, h) LPT; (c, i) DWT; (d, j) CVT; (e, k) NSCT; (f, l) ST-PCA.