Guo Zhicheng, Dang Jianwu, Wang Yangping, et al. Background modeling method based on multi-feature fusion[J]. Opto-Electronic Engineering, 2018, 45(12): 180206. doi: 10.12086/oee.2018.180206

Background modeling method based on multi-feature fusion

    Fund Project: Supported by the National Natural Science Foundation of China (61661026, 61162016), the Provincial Education Department of Gansu Province, China (2017D-08), the Natural Science Foundation of Gansu Province, China (1610RJZA039, 144WCGA162), and the Young Scholars Science Foundation of Lanzhou Jiaotong University (2016005)
  • In order to build a robust background model and improve the accuracy of foreground object detection, the temporal correlation of pixels at the same position of the video image and the spatial correlation of neighboring pixels are considered comprehensively, and a background modeling method based on multi-feature fusion is proposed. The neighborhood correlation of pixels within a single frame is used to quickly establish an initial background model, which is then updated using the pixel values, frequency, update time, and sensitivity of the video image sequence. The method effectively suppresses the ghost phenomenon and reduces both the holes inside moving targets and false foregrounds. Tests on multiple data sets show that the algorithm improves adaptability and robustness in dynamic and complex backgrounds.
  • [1] Ueng S K, Chen G Z. Vision based multi-user human computer interaction[J]. Multimedia Tools and Applications, 2016, 75(16): 10059-10076. doi: 10.1007/s11042-015-3061-z

    [2] Liu X, Chen Y. Target tracking based on adaptive fusion of multi-feature[J]. Opto-Electronic Engineering, 2016, 43(3): 58-65. doi: 10.3969/j.issn.1003-501X.2016.03.010

    [3] Piccardi M. Background subtraction techniques: a review[C]//Proceedings of 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, Netherlands, 2004: 3099-3104. https://ieeexplore.ieee.org/document/1400815

    [4] Lipton A J, Fujiyoshi H, Patil R S. Moving target classification and tracking from real-time video[C]//Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV'98), Princeton, NJ, USA, 1998: 8-14. https://ieeexplore.ieee.org/document/732851

    [5] Barron J L, Fleet D J, Beauchemin S. Performance of optical flow techniques[J]. International Journal of Computer Vision, 1994, 12(1): 43-77. doi: 10.1007/BF01420984

    [6] Dikmen M, Huang T S. Robust estimation of foreground in surveillance videos by sparse error estimation[C]//Proceedings of the 19th International Conference on Pattern Recognition, Tampa, FL, USA, 2008: 1-4. https://ieeexplore.ieee.org/document/4761910

    [7] Xue G J, Song L, Sun J, et al. Foreground estimation based on robust linear regression model[C]//Proceedings of the 18th IEEE International Conference on Image Processing, Brussels, Belgium, 2011: 3269-3272. https://ieeexplore.ieee.org/document/6116368

    [8] Xue G J, Song L, Sun J. Foreground estimation based on linear regression model with fused sparsity on outliers[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(8): 1346-1357. doi: 10.1109/TCSVT.2013.2243053

    [9] Zhang J M, Wang B. Moving object detection under condition of fast illumination change[J]. Opto-Electronic Engineering, 2016, 43(2): 14-21. doi: 10.3969/j.issn.1003-501X.2016.02.003

    [10] Li F, Zhang X H, Zhao C Q, et al. Vehicle detection research based on adaptive SILTP algorithm[J]. Computer Science, 2016, 43(6): 294-297.

    [11] Wang Y Z, Liang Y, Pan Q, et al. Spatiotemporal background modeling based on adaptive mixture of Gaussians[J]. Acta Automatica Sinica, 2009, 35(4): 371-378.

    [12] Fan W C, Li X Y, Wei K, et al. Moving target detection based on improved Gaussian mixture model[J]. Computer Science, 2015, 42(5): 286-288, 319.

    [13] Huo D H, Yang D, Zhang X H, et al. Principal component analysis based Codebook background modeling algorithm[J]. Acta Automatica Sinica, 2012, 38(4): 591-600.

    [14] Barnich O, van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences[J]. IEEE Transactions on Image Processing, 2011, 20(6): 1709-1724. doi: 10.1109/TIP.2010.2101613

    [15] Zhang Z B, Yuan X B. An improved PBAS algorithm for dynamic background[J]. Electronic Design Engineering, 2017, 25(3): 35-40.

    [16] Wang Y, Jodoin P M, Porikli F, et al. CDnet 2014: an expanded change detection benchmark dataset[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 2014: 393-400. https://ieeexplore.ieee.org/document/6910011

  • Overview: Background modeling for moving-target detection is one of the focal points, and one of the difficulties, in machine vision and intelligent video processing. Its goal is to extract the changed regions from a video sequence and detect moving targets reliably, which plays an important role in follow-up research such as object tracking, target classification, behavior analysis, and behavior understanding. The commonly used detection methods are the frame difference method, the optical flow method, and the background difference method. The background difference method has the advantages of low overhead, high speed, high accuracy, and accurate target extraction, and has become the most common method for detecting moving targets. Its detection performance mainly depends on a robust background model: the algorithms for establishing and updating the background model directly affect the final detection result. In order to build a robust background model and improve the accuracy of foreground detection, this paper comprehensively considers the temporal correlation of pixels at the same position across video frames and the spatial correlation of neighboring pixels, and proposes a background modeling method based on multi-feature fusion. The initial background model is established rapidly from the first frame of the video sequence, which reduces the sampling complexity of modeling. The background model is then updated using the pixel values, frequency, update time, and adaptive sensitivity of the video image sequence, where the adaptive sensitivity uses pixel-level feedback from the background model to set the sensitivity separately for regions of different complexity. High-complexity background areas are given a high sensitivity threshold to avoid generating false foreground points, while low-complexity areas are given a lower one to reduce the misclassification of background points. Through these fused features, the algorithm effectively suppresses the ghost phenomenon and reduces both the holes inside moving objects and the false foreground caused by pixel drift. To verify the effectiveness and practicability of the proposed algorithm, four background modeling algorithms, CodeBook, MOG, PBAS, and ViBe, were selected for comparison experiments. The experiments used Bootstrap, TimeOfDay, and WavingTrees from the Microsoft Wallflower dataset and highway, canoe, and fountain02 from the CDNet2014 dataset, covering three types of scene: indoor, outdoor, and complex background. The test results show that the proposed algorithm improves adaptability and robustness in dynamic and complex backgrounds.
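    The core idea of background differencing with a pixel-level adaptive decision threshold can be sketched as follows. This is a minimal illustrative sketch only, not the paper's exact update rules: the function names and the constants `alpha`, `r_scale`, and `r_min` are assumptions chosen for the example, and a simple running average stands in for the paper's multi-feature update.

    ```python
    import numpy as np

    def background_subtract(frame, background, radius):
        """Mark pixels as foreground where they deviate from the background
        model by more than the per-pixel decision threshold `radius`."""
        diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
        return diff > radius  # boolean foreground mask

    def update_model(frame, background, radius, mask,
                     alpha=0.05, r_scale=5.0, r_min=18.0):
        """Blend background-classified pixels into the model and adapt the
        per-pixel threshold: it grows where residual differences are large
        (dynamic regions) and relaxes toward r_min in stable regions.
        alpha, r_scale and r_min are illustrative constants, not values
        from the paper."""
        # Foreground pixels are not absorbed into the background model.
        bg = np.where(mask, background, (1 - alpha) * background + alpha * frame)
        residual = np.abs(frame - background)
        target = np.maximum(r_min, r_scale * residual)
        radius = 0.9 * radius + 0.1 * target
        return bg, radius
    ```

    On grayscale float images, a pixel that jumps from 100 to 200 against a threshold of 18 is flagged as foreground, its background value is left untouched, and its local threshold rises, which is the feedback behavior the overview describes for high-complexity regions.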


Figures(3)

Tables(3)

Article Metrics

Article views(7857) PDF downloads(2369) Cited by(0)
