Equal-scale structure from motion method based on deep learning

Citation: Chen Peng, Ren Jinjin, Wang Haixia, et al. Equal-scale structure from motion method based on deep learning[J]. Opto-Electronic Engineering, 2019, 46(12): 190006. doi: 10.12086/oee.2019.190006

  • Fund Project: Supported by the National Natural Science Foundation of China (61527808, 61602414) and the Hangzhou Major Science and Technology Innovation Project (20172011A027)
  • Corresponding author: Chen Peng, E-mail: chenpeng@zjut.edu.cn
  • CLC number: TP391
  • Abstract: Traditional multi-view geometry methods for recovering scene structure suffer from two problems. First, blurred images and low texture cause feature-point mismatches, which lower the reconstruction accuracy. Second, a monocular camera provides no scale information, so the reconstruction is determined only up to an unknown scale factor and the true scene structure cannot be obtained. To address these problems, this paper proposes an equal-scale structure-from-motion method based on deep learning. First, a convolutional neural network is used to obtain the depth information of each image. Then, to recover the scale of the monocular camera, an inertial measurement unit (IMU) is introduced: the acceleration and angular velocity measured by the IMU are aligned with the camera poses estimated by ORB-SLAM2 in both the time domain and the frequency domain, and the scale factor of the monocular camera is obtained in the frequency domain. Finally, the depth maps and the camera poses carrying the scale factor are fused to reconstruct the three-dimensional structure of the scene. Experiments show that the monocular depth maps obtained with the Depth CNN network alleviate the low output resolution and the loss of important feature information caused by stacked convolution and pooling operations, reaching an absolute relative error of 0.192 and an accuracy of 0.959. The multi-sensor fusion approach estimates the monocular scale in the frequency domain with a scale error of 0.24 m, which is more accurate than the scale obtained by the VIORB method. The reconstructed 3D model differs from the real size by about 0.2 m, which verifies the effectiveness of the proposed method.
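
    The final fusion step described above amounts to back-projecting each depth map through the camera intrinsics and transforming the resulting points by the metric-scale pose. Below is a minimal sketch of that step, not code from the paper: the function name, the intrinsic matrix K, and the camera-to-world pose convention T_wc are illustrative assumptions.

```python
import numpy as np

def backproject_to_world(depth, K, T_wc):
    """Back-project one depth map into a world-frame point cloud.

    depth : (H, W) array of depths in metres (network prediction at metric scale)
    K     : (3, 3) camera intrinsic matrix
    T_wc  : (4, 4) camera-to-world pose whose translation carries the
            scale factor recovered from the IMU
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Rays through each pixel, scaled by depth -> points in the camera frame
    points_cam = (np.linalg.inv(K) @ pixels) * depth.reshape(1, -1)       # 3 x N

    # Homogeneous transform into the world frame
    points_h = np.vstack([points_cam, np.ones((1, points_cam.shape[1]))])
    return (T_wc @ points_h)[:3].T                                        # N x 3

# Hypothetical usage: stack the clouds of all frames into one model
# cloud = np.vstack([backproject_to_world(d, K, T) for d, T in zip(depths, poses)])
```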

  • Overview: With the continuous development of computer vision, virtual reality, and multimedia communication, applications such as real-time obstacle avoidance, autonomous robot navigation, and unmanned driving require equipment to recognize and understand its surroundings more accurately, and obtaining 3D models with real scale is an important trend. Traditional 3D reconstruction methods recover image information and camera poses from two-dimensional images mainly through geometric computation, and they perform poorly on scenes with little texture, complicated geometric conditions, or monotonous structure. With the development of deep learning, networks that extract hierarchical features from images have been successfully applied to depth estimation, camera pose estimation, and 3D structure recovery. Meanwhile, acquiring 3D models at the real scale of objects remains a long-standing problem in computer vision.

    Two problems exist in traditional multi-view geometry methods for obtaining the three-dimensional structure of a scene. First, blurred images and low texture cause feature-point mismatches, which reduce the reconstruction accuracy. Second, the information obtained by a monocular camera lacks scale, so the reconstruction is determined only up to an unknown scale factor and the accurate scene structure cannot be recovered. This paper proposes an equal-scale structure-from-motion method based on deep learning. First, a convolutional neural network is used to obtain the depth information of each image. Then, to restore the scale information of the monocular camera, an inertial measurement unit (IMU) is introduced: the acceleration and angular velocity acquired by the IMU are coordinated with the camera poses acquired by ORB-SLAM2 in both the time domain and the frequency domain, and the scale information of the monocular camera is obtained in the frequency domain. Finally, the depth maps and the camera poses carrying the scale factor are merged to reconstruct the three-dimensional structure of the scene. Experiments show that the monocular depth maps obtained by the Depth CNN network solve the problem that stacked convolution and pooling operations output low-resolution images lacking important feature information, reaching an absolute relative error of 0.192 and an accuracy of 0.959. The multi-sensor fusion method achieves a scale error of 0.24 m in the frequency domain, more accurate than the scale obtained by the VIORB method. The error between the reconstructed 3D model and the real size is about 0.2 m, which verifies the effectiveness of the proposed method.
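
    The scale-recovery step compares the IMU acceleration with the second derivative of the visually estimated camera positions: ideally the two signals differ only by the unknown scale factor, so the ratio of their magnitude spectra inside a suitable frequency band yields that factor. The sketch below assumes the two signals have already been time-synchronized, expressed in a common frame with gravity removed, and resampled at a common rate fs; the function name, the band limits, and the least-squares form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def estimate_scale_frequency_domain(cam_pos, imu_acc, fs, band=(0.5, 5.0)):
    """Estimate the metric scale of a monocular trajectory from IMU data.

    cam_pos : (N, 3) camera positions from ORB-SLAM2 (arbitrary scale),
              time-synchronized with the IMU and sampled at fs
    imu_acc : (N, 3) IMU accelerations in m/s^2, gravity removed and
              rotated into the same frame as cam_pos
    fs      : common sampling rate in Hz
    band    : frequency band (Hz) in which the two spectra are compared
    """
    dt = 1.0 / fs
    # Second derivative of the visual positions = acceleration up to scale
    vis_acc = np.gradient(np.gradient(cam_pos, dt, axis=0), dt, axis=0)

    freqs = np.fft.rfftfreq(len(cam_pos), d=dt)
    mask = (freqs >= band[0]) & (freqs <= band[1])

    # Magnitude spectra of each axis, restricted to the chosen band
    V = np.abs(np.fft.rfft(vis_acc, axis=0))[mask]
    A = np.abs(np.fft.rfft(imu_acc, axis=0))[mask]

    # Least-squares ratio of the spectra: imu_acc ~ scale * vis_acc
    return np.sum(V * A) / np.sum(V * V)
```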

  • Figure 1.  Flowchart of the equal-scale structure-from-motion method based on deep learning

    Figure 2.  Network architecture for depth estimation

    Figure 3.  The experiment platform

    Figure 4.  Depth CNN prediction results on the sculpture. (a) Original image; (b) Depth predicted by Depth CNN; (c) Ground truth; (d) Depth predicted by Godard et al.[21]

    Figure 5.  Comparison of motion trajectories. (a) ORB-SLAM2; (b) VIORB; (c) Our method

    Figure 6.  Equal-scale model of the sculpture

    Table 1.  Errors of depth prediction

    Method             Dataset   Abs_rel   Sq_rel   RMS     Log_RMS   δ<1.25   δ<1.25²   δ<1.25³
    Depth CNN          KITTI     0.208     1.768    6.856   0.283     0.678    0.885     0.975
    Godard et al.[21]  KITTI     0.148     1.344    5.927   0.247     0.803    0.922     0.964
    Depth CNN          Ours      0.192     1.576    6.857   0.2737    0.737    0.903     0.959
    Godard et al.[21]  Ours      0.213     3.819    8.519   0.322     0.758    0.889     0.943
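
    The column headers in Table 1 are the standard monocular depth evaluation metrics: absolute relative error, squared relative error, RMSE, log RMSE, and the fraction of pixels whose ratio between predicted and true depth stays below 1.25, 1.25², and 1.25³. A minimal sketch of how these are conventionally computed (assuming gt and pred are matched arrays of valid, positive depths; this is the common convention, not code from the paper) is:

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular depth evaluation metrics over valid pixels."""
    gt, pred = gt.flatten(), pred.flatten()

    abs_rel = np.mean(np.abs(gt - pred) / gt)                        # Abs_rel
    sq_rel  = np.mean((gt - pred) ** 2 / gt)                         # Sq_rel
    rms     = np.sqrt(np.mean((gt - pred) ** 2))                     # RMS
    log_rms = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))     # Log_RMS

    # Accuracy under threshold: fraction of pixels with max(gt/pred, pred/gt) < 1.25^k
    ratio = np.maximum(gt / pred, pred / gt)
    a1, a2, a3 = (np.mean(ratio < 1.25 ** k) for k in (1, 2, 3))
    return abs_rel, sq_rel, rms, log_rms, a1, a2, a3
```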

    Table 2.  Errors of trajectory

    Method          RMSE     Maximum   Minimum   Median   Mean
    ORB-SLAM2[18]   1.207    1.8631    0.5493    1.1391   1.6763
    VIORB[17]       0.2672   0.4675    0.1295    0.2538   0.2594
    Our method      0.1174   0.2435    0.00422   0.1188   0.1112
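
    Table 2 reports per-frame statistics of the absolute trajectory error between the estimated and ground-truth camera positions. A minimal sketch of how such statistics are commonly computed, assuming the two trajectories have already been associated by timestamp and aligned to a common frame (assumptions, not the paper's exact procedure), is:

```python
import numpy as np

def trajectory_error_stats(est_pos, gt_pos):
    """Statistics of the per-frame absolute trajectory error (positions in metres)."""
    err = np.linalg.norm(est_pos - gt_pos, axis=1)   # Euclidean error of each frame
    rmse = np.sqrt(np.mean(err ** 2))
    return rmse, err.max(), err.min(), np.median(err), err.mean()
```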

    Table 3.  Comparison of real-size reconstruction

                          Height/m   Width/m
    Reconstruction size   2.16       4.38
    Real size             1.9        4.5
  • [1] Liu G, Peng Q S, Bao H J. An interactive modeling system from multiple images[J]. Journal of Computer-Aided Design & Computer Graphics, 2004, 16(10): 1419-1424, 1429. doi: 10.3321/j.issn:1003-9775.2004.10.017

    [2] Cao T Y, Cai H Y, Fang D M, et al. Robot vision localization system based on image content matching[J]. Opto-Electronic Engineering, 2017, 44(5): 523-533. doi: 10.3969/j.issn.1003-501X.2017.05.008

    [3] Tomasi C, Kanade T. Shape and motion from image streams under orthography: a factorization method[J]. International Journal of Computer Vision, 1992, 9(2): 137-154.

    [4] Pollefeys M, Koch R, van Gool L. Self-calibration and metric reconstruction inspite of varying and unknown intrinsic camera parameters[J]. International Journal of Computer Vision, 1999, 32(1): 7-25.

    [5] Dai J J. Research on the theory and algorithms of 3D reconstruction from multiple images[D]. Shanghai: Shanghai Jiao Tong University, 2012.

    [6] Zhang T. 3D reconstruction based on monocular vision[D]. Xi'an: Xidian University, 2014.

    [7] Xu Y X, Chen F. Real-time stereo visual localization based on multi-frame sequence motion estimation[J]. Opto-Electronic Engineering, 2016, 43(2): 89-94. doi: 10.3969/j.issn.1003-501X.2016.02.015

    [8] Huang W Y, Xu X M, Wu F Q, et al. Research of underwater binocular vision stereo positioning technology in nuclear condition[J]. Opto-Electronic Engineering, 2016, 43(12): 28-33. doi: 10.3969/j.issn.1003-501X.2016.12.005

    [9] Yi K M, Trulls E, Lepetit V, et al. LIFT: learned invariant feature transform[C]//Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 467-483.

    [10] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 770-778.

    [11] Newell A, Yang K Y, Deng J. Stacked hourglass networks for human pose estimation[C]//Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 483-499.

    [12] Zhou T H, Brown M, Snavely N, et al. Unsupervised learning of depth and ego-motion from video[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017: 6612-6619.

    [13] Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: real-time dense surface mapping and tracking[C]//Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 2011: 127-136.

    [14] Usenko V, Engel J, Stückler J, et al. Direct visual-inertial odometry with stereo cameras[C]//Proceedings of 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 2016: 1885-1892.

    [15] Concha A, Loianno G, Kumar V, et al. Visual-inertial direct SLAM[C]//Proceedings of 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 2016: 1331-1338.

    [16] Ham C, Lucey S, Singh S. Hand waving away scale[C]//Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 279-293.

    [17] Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 796-803. doi: 10.1109/LRA.2017.2653359

    [18] Mur-Artal R, Tardós J D. ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262. doi: 10.1109/TRO.2017.2705103

    [19] Ham C, Lucey S, Singh S. Absolute scale estimation of 3D monocular vision on smart devices[M]//Hua G, Hua X S. Mobile Cloud Visual Media Computing: From Interaction to Service. New York: Springer International Publishing, 2015: 329-344.

    [20] Mustaniemi J, Kannala J, Särkkä S, et al. Inertial-based scale estimation for structure from motion on mobile devices[C]//Proceedings of 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 2017: 4394-4401.

    [21] Godard C, Mac Aodha O, Brostow G J. Unsupervised monocular depth estimation with left-right consistency[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017: 6602-6610.



Publication history
Received: 2019-01-08
Revised: 2019-02-18
Published: 2019-12-01
