Monocular position and pose measurement method based on surface topography of object

Citation: Guan Yin, Wang Xiangjun, Yin Lei, et al. Monocular position and pose measurement method based on surface topography of object[J]. Opto-Electronic Engineering, 2018, 45(1): 170522. doi: 10.12086/oee.2018.170522


  • CLC number: TH391

Monocular position and pose measurement method based on surface topography of object

  • Fund Project: Supported by the National Natural Science Foundation of China (51575388)
  • Abstract: To capture the pose changes of a moving object in wind tunnel experiments, a single-camera visual pose measurement method is proposed that incorporates the three-dimensional topography of the object surface. The method takes multi-point perspective imaging as the basis for solving the object's pose change, uses image feature corners of the object as feature points, and obtains the three-dimensional coordinates of the feature points from a model of the object's surface topography. The accuracy of the method was verified experimentally: at an observation distance of 400 mm, the mean displacement measurement error is 0.03 mm with a root-mean-square error of 0.234 mm; the mean errors of the pitch, yaw and roll angles are 0.08°, 0.1° and 0.09°, with RMS errors of 0.485°, 0.312° and 0.442°, respectively. The results show that the method achieves accuracy sufficient for practical measurement.

  • Overview: To obtain the pose changes of a moving object in wind tunnel experiments, this paper presents a single-camera position and pose measurement method that incorporates the three-dimensional topography of the object surface. Traditional monocular visual pose measurement methods require optical marker points to be installed on the object, with the 3D coordinates of each marker determined at installation time; the markers' image coordinates are then extracted from pictures to compute the object's pose change. The disadvantages of this approach are its complicated setup, the small number of markers and their susceptibility to occlusion, and the distortion the markers introduce into the object's surface structure. When the surface of the measured object cannot carry optical markers, the method must instead derive feature points from the object's own image properties.

    The proposed method takes multi-point perspective (PnP) imaging theory as the basis for solving the pose change, uses image feature corners of the object as feature points, and obtains the feature points' three-dimensional coordinates from a three-dimensional topography model of the object surface. The topography model is obtained with SFM (structure-from-motion) multi-view 3D reconstruction. Finally, the RPnP algorithm is applied to the image coordinates and three-dimensional coordinates of the feature points to compute the pose change of the object.
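    The core numerical step, recovering pose from matched image coordinates and model coordinates, can be sketched in a few lines. The paper uses the RPnP algorithm; the sketch below substitutes a plain DLT (direct linear transform) solver for brevity, so the function name, the NumPy implementation, and the noise-free assumption are illustrative rather than the authors' code:

```python
import numpy as np

def dlt_pose(obj_pts, img_pts, K):
    """Estimate the camera pose [R|t] from n >= 6 noise-free 2D-3D
    correspondences with a direct linear transform (DLT)."""
    # Normalize pixel coordinates with the intrinsics so the unknown
    # 3x4 projection matrix is [R|t] up to scale.
    pts_h = np.column_stack([img_pts, np.ones(len(img_pts))])
    xy = (np.linalg.inv(K) @ pts_h.T).T
    rows = []
    for (X, Y, Z), (x, y, _) in zip(obj_pts, xy):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)
    # Remove the arbitrary scale: the rows of a rotation have unit norm.
    P /= np.linalg.norm(P[2, :3])
    # Fix the arbitrary sign so the rotation part has determinant +1.
    if np.linalg.det(P[:, :3]) < 0:
        P = -P
    # Project the 3x3 block onto the nearest rotation matrix.
    U, _, V = np.linalg.svd(P[:, :3])
    return U @ V, P[:, 3]
```

    A DLT needs more points and is less accurate under noise than RPnP, which is why the paper's O(n) solver is preferable in practice; the sketch only shows the shape of the 2D-3D solving step.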

    The basic principle of the pose solution is introduced. The SFM reconstruction process and the feature-point matching and filtering process based on grid-based motion statistics (GMS) are briefly described. The method of using the 3D surface topography model to compute the three-dimensional coordinates of image feature corners is described in detail, and the accuracy characteristics of the extracted three-dimensional coordinates are analyzed.
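    The step that replaces physical markers, turning a detected image corner into a 3D model point, amounts to casting the corner's viewing ray against the reconstructed surface. A minimal sketch, assuming the topography model is reduced to a triangle mesh expressed in the camera frame (a Möller-Trumbore ray-triangle test; all names are illustrative, not the paper's implementation):

```python
import numpy as np

def pixel_to_surface_point(pixel, K, triangles):
    """Back-project an image corner onto a triangulated surface model.
    Casts the camera ray through `pixel` and returns the nearest
    ray-triangle intersection, or None if the ray misses the mesh.
    pixel: (u, v); K: 3x3 intrinsics; triangles: (m, 3, 3) vertex array
    expressed in the camera frame (camera center at the origin)."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray /= np.linalg.norm(ray)
    best_t, best_pt = np.inf, None
    for v0, v1, v2 in triangles:
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(ray, e2)
        det = e1 @ p
        if abs(det) < 1e-12:          # ray parallel to the triangle plane
            continue
        s = -v0                        # ray origin (0,0,0) minus v0
        u = (s @ p) / det
        q = np.cross(s, e1)
        v = (ray @ q) / det
        t = (e2 @ q) / det
        # Accept hits inside the triangle (barycentric test), keep nearest.
        if 0 <= u and 0 <= v and u + v <= 1 and 0 < t < best_t:
            best_t, best_pt = t, ray * t
    return best_pt
```

    The accuracy of the returned 3D coordinate is bounded by the reconstruction error of the surface model, which is why the paper analyzes the extraction accuracy of the reconstructed coordinates (Table 1).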

    The accuracy of the measurement method was verified experimentally under laboratory conditions. At an observation distance of 400 mm, the mean displacement measurement error is 0.03 mm and the root-mean-square error is 0.234 mm. The mean errors of the pitch, yaw and roll angles are 0.08°, 0.1° and 0.09°, and the RMS errors are 0.485°, 0.312° and 0.442°, respectively. The experimental results show that the method achieves accuracy sufficient for practical measurement.
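    The two statistics quoted for each quantity are the signed mean error and the root-mean-square error against the reference (turntable or translation stage) values. An illustrative computation (function and variable names are placeholders, not from the paper):

```python
import numpy as np

def error_stats(measured, truth):
    """Signed mean error and root-mean-square error of a series of
    repeated measurements against reference values."""
    e = np.asarray(measured, dtype=float) - np.asarray(truth, dtype=float)
    return e.mean(), np.sqrt(np.mean(e ** 2))
```

    A near-zero mean combined with a larger RMSE, as in the displacement result (0.03 mm vs. 0.234 mm), indicates errors that are roughly zero-centered but scattered.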

    Figure 1.  The relationship between the target and the camera before and after movement

    Figure 2.  SFM reconstruction process

    Figure 3.  The target's point cloud using 3D reconstruction

    Figure 4.  Simulation results of 3D coordinate error

    Figure 5.  Comparison of matching results. (a) GMS matching results; (b) RANSAC matching results

    Figure 6.  The aircraft model to be tested

    Figure 7.  The experiment on turntable

    Figure 8.  Angle measurement error results. (a) Yaw angle measurement error; (b) Roll angle measurement error; (c) Pitch angle measurement error

    Figure 9.  Distance measurement experiment

    Figure 10.  Distance measurement experiment results

    Table 1.  Reconstruction error of feature points

             x/mm      y/mm      z/mm      d/mm
    mean    -0.001     0.002     0.001     0.154
    RMSE     0.081     0.110     0.101     0.169

    Table 2.  Calibration result of the camera

    fx/pixels    fy/pixels    u0/pixels    v0/pixels    kc1       kc2
    3439.7       3439.4       676.9        562.9        -0.086    0.442
  • [1]

    苗锡奎, 朱枫, 郝颖明.多像机非共视场的非合作飞行器位姿测量方法[J].红外与激光工程, 2013, 42 (3): 709-715. http://www.wanfangdata.com.cn/details/detail.do?_type=perio&id=hwyjggc201303042&f=datatang

    Miao X K, Zhu F, Hao Y M. Pose measurement method for non-cooperative space vehicle using multiple non-overlapping cameras[J]. Infrared and Laser Engineering, 2013, 42 (3): 709-715. http://www.wanfangdata.com.cn/details/detail.do?_type=perio&id=hwyjggc201303042&f=datatang

    [2]

    刘巍, 陈玲, 马鑫, 等.基于彩色图像的高速目标单目位姿测量方法[J].仪器仪表学报, 2016, 37 (3): 675-682. http://www.cqvip.com/QK/94550X/201603/668422036.html

    Liu W, CHEN L, Ma X, et al. Monocular position and pose measurement method for high-speed targets based on colored images[J]. Chinese Journal of Scientific Instrument, 2016, 37 (3): 675-682. http://www.cqvip.com/QK/94550X/201603/668422036.html

    [3]

    宋薇, 周扬.基于CAD模型的单目六自由度位姿测量[J].光学 精密工程, 2016, 24 (4): 882-891. http://industry.wanfangdata.com.cn/dl/Detail/Periodical?id=Periodical_gxjmgc201604025

    Song W, Zhou Y. Estimation of monocular vision 6-DOF pose based on CAD model[J]. Optics and Precision Engineering, 2016, 24 (4): 882-891. http://industry.wanfangdata.com.cn/dl/Detail/Periodical?id=Periodical_gxjmgc201604025

    [4]

    Rublee E, Rabaud V, Konolige K, et al. ORB: an efficient alternative to SIFT or SURF[C]//Proceedings of 2011 IEEE International Conference on Computer Vision, 2012: 2564-2571.http://ieeexplore.ieee.org/abstract/document/6126544/

    [5]

    Li S Q, Xu C, Xie M. A Robust O(n) solution to the perspective-n-point problem[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34 (7): 1444-1450. doi: 10.1109/TPAMI.2012.41

    [6]

    Heinly J, Dunn E, Frahm J M. Correcting for duplicate scene structure in sparse 3D reconstruction[C]//Proceedings of the 13th European Conference on Computer Vision, 2014: 780-795.https://link.springer.com/chapter/10.1007/978-3-319-10593-2_51

    [7]

    Wu C C. Towards linear-time incremental structure from motion[C]//Proceedings of 2013 International Conference on 3D Vision-3DV, 2013: 127-134.http://ieeexplore.ieee.org/abstract/document/6599068/

    [8]

    Hartley R, Zisserman A. Multiple View Geometry in Computer Vision[M]. 2nd ed. Cambridge: Cambridge University Press, 2003.

    [9]

    Triggs B, Mclauchlan P F, Hartley R I, et al. Bundle adjustment-a modern synthesis[C]//Proceedings of the International Workshop on Vision Algorithms: Theory and Practice, 1999: 298-372.https://link.springer.com/chapter/10.1007/3-540-44480-7_21

    [10]

    Furukawa Y, Ponce J. Accurate, dense, and robust multiview stereopsis[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007: 1-8.http://ieeexplore.ieee.org/abstract/document/5226635/

    [11]

    于雅楠. 微型旋翼飞行体自适应气动外形抗扰动特性研究[D]. 天津: 天津大学, 2012.

    Yu Y N. Research on anti-disturbance performance of adaptive aerodynamic shape for hovering micro air vehicle[D]. Tianjin: Tianjin University, 2012.http://cdmd.cnki.com.cn/Article/CDMD-10056-1013039498.htm

    [12]

    Lowe D G. Object recognition from local scale-invariant features[C]//Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, 2: 1150-1157http://ieeexplore.ieee.org/abstract/document/790410/

    [13]

    Men H, Gebre B, Puchiraju K. Color point cloud registration with 4D ICP algorithm[C]//Proceedings of 2011 IEEE International Conference on Robotics and Automation, 2011: 1511-1516.http://ieeexplore.ieee.org/abstract/document/5980407/

    [14]

    Bian J W, Lin W Y, Matsushita Y, et al. GMS: grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 2828-2837.

    [15]

    Mur-artal R, Montiel J M M, Tardos J D. Orb-slam: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31 (5): 1147-1163. doi: 10.1109/TRO.2015.2463671

    [16]

    Mur-artal R, Tardos J D. Orb-slam2: an open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33 (5): 1255-1262. doi: 10.1109/TRO.2017.2705103



Metrics
  • Article views: 7312
  • PDF downloads: 3823
  • Citing articles: 0
Publication history
Received: 2017-09-27
Revised: 2017-11-17
Published: 2018-01-15
