Liu Jun, Meng Weixiu, Yu Jie, et al. Design and implementation of DRFCN in-depth network for military target identification[J]. Opto-Electronic Engineering, 2019, 46(4): 180307. doi: 10.12086/oee.2019.180307

Design and implementation of DRFCN in-depth network for military target identification

    Fund Project: Supported by Naval Equipment Pre-research Innovation Project and National Natural Science Foundation of China (61333009, 61427808)
  • Automatic target recognition (ATR) has long been a key and difficult problem in the military field. This paper designs and implements DRFCN, a new deep network for military target recognition. First, in the DRPN part, convolution modules are densely connected so that the features of every layer in the deep model are reused, yielding high-quality candidate target regions. Second, in the DFCN part, the semantic information of high-level and low-level feature maps is fused to predict the category and location of targets within the candidate regions. Finally, the network structure and the parameter training method of DRFCN are given. We then analyze the DRFCN algorithm experimentally: 1) In comparison experiments on the PASCAL VOC dataset, DRFCN clearly outperforms existing algorithms in mean average precision, speed, and model size, owing to the dense connection of convolution modules; the experiments also verify that DRFCN effectively alleviates the vanishing- and exploding-gradient problems. 2) Experiments on a self-built military target dataset show that DRFCN accomplishes the military target recognition task in terms of both accuracy and speed.
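The dense connection described for the DRPN part can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `conv3x3_stub` is a hypothetical stand-in for a learned 3×3 convolution, and `growth` plays the role of the per-layer output channel count. The point it demonstrates is the feature-reuse pattern: each layer receives the channel-wise concatenation of the block input and all earlier layer outputs.

```python
import numpy as np

def conv3x3_stub(x, out_channels, rng):
    # Stand-in for a learned 3x3 convolution: a random channel-mixing
    # projection followed by ReLU. Only the channel arithmetic matters here.
    w = rng.standard_normal((out_channels, x.shape[0]))
    y = w @ x.reshape(x.shape[0], -1)
    return np.maximum(y, 0).reshape(out_channels, *x.shape[1:])

def dense_block(x, num_layers=4, growth=8, seed=0):
    """Densely connected block: layer i takes the channel-wise
    concatenation of the input and all previous layer outputs,
    so every feature map is reused by all subsequent layers."""
    rng = np.random.default_rng(seed)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)   # reuse all prior features
        features.append(conv3x3_stub(inp, growth, rng))
    return np.concatenate(features, axis=0)

x = np.ones((16, 32, 32))        # C, H, W feature map
y = dense_block(x)
print(y.shape)                   # (48, 32, 32): 16 input + 4 layers * 8 growth
```

Because later layers see the outputs of all earlier ones directly, gradients also flow back through these short concatenation paths, which is the mechanism the paper credits for alleviating vanishing gradients.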

  • Overview: Automatic target recognition (ATR) has long been a key and difficult problem in the military field. Photoelectric detection is one of the principal detection methods in modern early-warning and detection information networks. In actual combat, optoelectronic devices acquire massive image and video data of different types, timings, and resolutions. For these massive infrared or visible-light images, this paper designs and implements DRFCN, a deep network for military target recognition. First, in the DRPN part, convolution modules are densely connected so that the features of every layer in the deep model are reused, yielding high-quality candidate target regions from the input image. Second, in the DFCN part, the semantic information of high-level and low-level feature maps is fused to predict the category and location of targets within the candidate regions. Finally, the network structure and the parameter training method of DRFCN are given. In the experimental analysis and discussion: 1) A large number of experiments, with loss curves and P-R curves of various types, demonstrate the convergence of the DRFCN algorithm. 2) For the classification model pre-trained on the ImageNet dataset, DRFCN achieves 93.1% Top-5 accuracy and 76.1% Top-1 accuracy with a model size of 112.3 MB. 3) On the PASCAL VOC dataset, DRFCN reaches 75.3% accuracy, 5.4% higher than the VGG16 network, with a test time of 0.12 s, 0.3 s faster than VGG16; DRFCN is therefore superior to existing deep-learning-based target recognition algorithms. The experiments also verify that DRFCN effectively alleviates the vanishing- and exploding-gradient problems. 4) On the self-built military target dataset, DRFCN achieves an accuracy of 77.5% and a test time of 0.20 s; compared with the PASCAL VOC2007 experiments, the accuracy is 2.2% higher and the time is reduced by 80 ms. The results show that DRFCN accomplishes the military target recognition task in terms of both accuracy and speed. In summary, compared with existing deep learning networks, DRFCN offers better overall performance: it improves mean average precision, reduces the size of the deep network model, and effectively alleviates the vanishing- and exploding-gradient problems.
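The high/low-level fusion described for the DFCN part can be sketched as follows. This is a minimal shape-level illustration under assumed tensor layouts (C, H, W), not the paper's implementation: a coarse, semantically strong deep-layer map is upsampled to the resolution of a finer shallow-layer map and the two are concatenated channel-wise, so the fused map carries both semantics and spatial detail.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a C, H, W feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse(high, low):
    """Fuse a coarse, semantically strong map (high) with a finer,
    spatially detailed map (low) by upsampling and channel concat."""
    up = upsample2x(high)
    assert up.shape[1:] == low.shape[1:], "spatial sizes must match after upsampling"
    return np.concatenate([up, low], axis=0)

high = np.ones((32, 8, 8))      # deep-layer map: low resolution, rich semantics
low  = np.ones((16, 16, 16))    # shallow-layer map: high resolution, fine detail
fused = fuse(high, low)
print(fused.shape)              # (48, 16, 16)
```

A prediction head operating on `fused` can then localize targets precisely (from the low-level detail) while classifying them reliably (from the high-level semantics), which is the role the overview assigns to the DFCN part.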


Figures(6)

Tables(4)
