Citation: Xuedian Zhang, Hong Wang, Minshan Jiang, et al. Applications of saliency analysis in focus image fusion[J]. Opto-Electronic Engineering, 2017, 44(4): 435-441. doi: 10.3969/j.issn.1003-501X.2017.04.008

Applications of saliency analysis in focus image fusion

  • In the study of autofocus technology, we propose an image fusion method based on saliency analysis to solve the all-in-focus problem. First, the focused area in the source image is located by the graph-based visual saliency (GBVS) algorithm; then watershed and morphological methods are used to obtain a closed region from the saliency map, and pseudo-focus regions are removed. The defocused region is processed by the Shearlet transform, and the SML operator is used to choose the fusion parts. Finally, the precisely focused region and the processed defocused region are fused into an all-in-focus image. Experiments show that the fused image of our method is clear, rich in detail and has the best visual effect, improving definition and fusion quality by more than 5% compared with traditional methods.
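
As an illustration of the focused-region extraction step, the sketch below turns a precomputed saliency map into a closed focused-region mask. It is a minimal sketch rather than the paper's implementation: it assumes a GBVS saliency map is already available, substitutes Otsu thresholding plus morphological closing for the watershed step, and treats small isolated components as pseudo-focus regions; the kernel size and area threshold are illustrative values.

```python
import cv2
import numpy as np

def focused_region_mask(saliency, min_area=500, kernel_size=15):
    """Turn a saliency map (2-D float array) into a closed focused-region mask."""
    # Normalize the saliency map to 8 bits for thresholding.
    rng = saliency.max() - saliency.min() + 1e-12
    sal8 = np.uint8(255 * (saliency - saliency.min()) / rng)
    # Otsu threshold separates salient (focused) pixels from the background.
    _, mask = cv2.threshold(sal8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological closing fills gaps so the focused region becomes a closed area.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Drop small isolated blobs, standing in here for the pseudo-focus regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep
```

Pixels selected by the mask would be copied directly from the focused source image; the remaining pixels come from the Shearlet-domain fusion described above.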

  • Abstract: In autofocus research it is difficult to obtain an all-in-focus image because of the optical system's limited depth of focus, yet a high-definition image is very important for scientific research. Detection of the focused region is the key issue in multi-focus image fusion, and the blurred boundary of the focused region makes it harder to identify focused regions accurately. For this reason, the focused region of the source image should be fused directly as far as possible. We propose an image fusion method based on saliency analysis that solves the all-in-focus problem. Saliency analysis simulates the human visual attention mechanism by computing color, orientation, brightness and other feature information to obtain the visual saliency of the image; the human eye usually fixates on regions of higher saliency. The saliency maps, obtained by comparing differences among the feature components, are used to identify the focused regions of the multi-focus images. The proposed method proceeds as follows. First, the focused area in the source image is located by the graph-based visual saliency (GBVS) algorithm; then watershed and morphological methods are used to obtain a closed salient region, and pseudo-focus regions are removed. Since the defocused region contains abundant texture and directional information, it is processed by the Shearlet transform, and the SML operator is used to choose the fusion parts. Finally, the precisely focused region and the processed defocused region are fused into an all-in-focus image. Experiments show that the fused image of the proposed method is clear and rich in detail. Compared with three traditional methods on four objective evaluation criteria (entropy, QAB/F, MI and SF), the NSCT method performs best among the three, while the entropy of the proposed method is 1% higher than that of NSCT, QAB/F is about 4% higher, and MI and SF are each about 2% higher, which means the proposed method yields the clearest image and the best fusion performance. In the line chart the advantage of the proposed method is even more obvious. The subjective visual effects and objective evaluation of the results demonstrate that the proposed method is an effective image fusion method; further research will address color image fusion.
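
For concreteness, the sketch below gives standard NumPy definitions of pieces named above: the sum-modified-Laplacian (SML) focus measure used to choose the fusion parts, and two of the four evaluation criteria, entropy and spatial frequency (SF). These follow the usual textbook formulas rather than the paper's own code; the SML step and window size are illustrative, and QAB/F and MI are omitted for brevity.

```python
import numpy as np

def sml(img, step=1, win=3):
    """Sum-modified-Laplacian of a grayscale image; larger values mean sharper content."""
    img = img.astype(np.float64)
    ml = np.zeros_like(img)
    # Modified Laplacian: absolute second differences along rows and columns.
    ml[step:-step, :] += np.abs(2 * img[step:-step, :] - img[:-2 * step, :] - img[2 * step:, :])
    ml[:, step:-step] += np.abs(2 * img[:, step:-step] - img[:, :-2 * step] - img[:, 2 * step:])
    # Sum the modified Laplacian over a win x win neighborhood at every pixel.
    pad = win // 2
    padded = np.pad(ml, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + h, dx:dx + w]
    return out

def entropy(img, levels=256):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) from row and column differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

Higher entropy and SF of the fused image indicate richer information and sharper detail, which is how the comparison in the abstract is read.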
