Li H B, Sun Y, Zhang W M, et al. The detection method for coal dust caused by chute discharge based on YOLOv4-tiny[J]. Opto-Electron Eng, 2021, 48(6): 210049. doi: 10.12086/oee.2021.210049

The detection method for coal dust caused by chute discharge based on YOLOv4-tiny

    Fund Project: Natural Science Foundation of Hebei Province (F2019203195)
  • Coal ports generate dust when the ship loader discharges coal through its chute. To address the problem of detecting this dust, this paper proposes a coal dust detection method based on deep learning (YOLOv4-tiny). An improved YOLOv4-tiny network is trained and tested on a data set of chute-discharge dust. Because a detection algorithm cannot measure dust concentration directly, the dust is divided into four categories for detection, the total detection-box area of each category is counted, and the dust concentration is then estimated approximately by a weighted sum of these areas. The experimental results show that the detection accuracies (AP) of the four dust categories are 93.98%, 93.57%, 80.03%, and 57.43%, the mean average precision (mAP) is 81.27% (close to the 83.38% of YOLOv4), and the detection speed is 25.1 FPS (higher than the 13.4 FPS of YOLOv4). The algorithm balances detection speed and accuracy and can be used for real-time dust detection, improving the efficiency of suppressing the coal dust generated by chute discharge.
  • Overview: In recent years, with the public's growing environmental awareness and the tightening of environmental-protection policies, effectively reducing or quickly suppressing the dust generated during production has become a problem that coal ports must face. Unloading coal into the cargo ship through the chute of the ship loader is the last link of the whole production chain. In the upstream links, coal ports adopt various dust-suppression measures, such as windproof dust-suppression nets and dry dust-removal systems. Although these measures effectively reduce dust when the coal falls into the cabin, they do not form a closed-loop control with the unloading link and cannot automatically suppress the dust it produces. Separate treatment of unloading dust therefore remains an important part of the overall environmental-protection operation. At present, the main measure used by domestic coal ports to suppress this kind of dust is water spraying. Since discharge dust occurs only occasionally, keeping the sprinkler dust-removal device always on leads to excessive watering, which reduces the actual coal load of the cargo ship and hurts economic benefits; having workers control the device manually on site is not conducive to building unmanned ports. Therefore, an automatic, real-time method for detecting coal discharge dust is needed: when dust is detected, an early-warning signal notifies the dust-suppression operation to take corresponding measures. An improved deep convolutional neural network (YOLOv4-tiny) is trained and tested on the dust data set to learn the internal feature representation of the dust.
The improvements are as follows: an SERes module is proposed to strengthen the information interaction between channels of the detection network; an XRes module is proposed to increase the depth and width of the network; and an SPP module and a PRN module are added to enhance the network's feature-fusion ability. Because the detection algorithm cannot measure dust concentration, the dust is divided into four categories for detection, the total detection-box area of each category is counted, and the dust concentration is then estimated approximately by a weighted sum of these areas. The experimental results show that the detection accuracies (AP) of the four dust categories are 93.98%, 93.57%, 80.03%, and 57.43%, the mean average precision (mAP) is 81.27% (close to the 83.38% of YOLOv4), and the detection speed is 25.1 FPS (higher than the 13.4 FPS of YOLOv4). The algorithm balances detection speed and accuracy and can be used for real-time dust detection, improving the efficiency of suppressing the coal dust generated by chute discharge.
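The weighted-area concentration estimate described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-category weight values, the box format, and the alarm threshold are all assumptions introduced here for the example.

```python
# Sketch of the paper's post-processing idea: sum the detection-box area of
# each of the four dust categories, then estimate dust severity as a weighted
# sum of those areas. Weights and threshold below are illustrative only.

def box_area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def dust_score(detections, weights=(0.1, 0.3, 0.6, 1.0)):
    """Weighted sum of per-category total box areas.

    detections: list of (category_index, box) pairs, categories 0..3
                assumed ordered from lightest to densest dust.
    weights:    hypothetical per-category weights.
    """
    totals = [0.0] * 4
    for cat, box in detections:
        totals[cat] += box_area(box)
    return sum(w * a for w, a in zip(weights, totals))

# Example frame: two light-dust boxes (category 0) and one dense-dust box
# (category 3); areas 5000, 2500, and 1600 give 0.1*7500 + 1.0*1600 = 2350.
dets = [(0, (0, 0, 100, 50)), (0, (10, 10, 60, 60)), (3, (0, 0, 40, 40))]
if dust_score(dets) > 2000:  # assumed alarm threshold
    print("trigger dust-suppression warning")
```

In a real deployment the score would be computed per frame from the YOLOv4-tiny output, and the early-warning signal mentioned above would be raised whenever it exceeds a calibrated threshold.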


Figures(13)

Tables(3)
