Zhang Botao, Zhong Chaoliang, Wu Qiuxuan. A target localization method with monocular hand-eye vision[J]. Opto-Electronic Engineering, 2018, 45(5): 170696. doi: 10.12086/oee.2018.170696

A target localization method with monocular hand-eye vision

    Fund Project: Supported by National Natural Science Foundation of China (61503108) and Natural Science Foundation of Zhejiang Province (LY17F030022)
  • The installation of binocular vision at the end of a manipulator reduces its usability in environments with obstacles. To address this problem, this study puts forward a target localization method that combines a laser with monocular hand-eye vision. In the proposed method, the centre of the laser spot is obtained by the hand-eye vision system, and the geometric relations among the laser emission point, the light spot and the optical axis of the camera are used to calculate the distance to the target. The D-H method is then employed to construct the coordinate conversion system, so that the location of the target can be calculated. The measuring precision is negatively correlated with the distance, so the method is suitable for measurement at short and medium range. Compared with commonly used binocular measurement methods, the proposed method uses fewer cameras, which reduces the width of the measurement system mounted on the manipulator and makes it more applicable to narrow workspaces. Moreover, it also improves the effective load capacity of the manipulator.
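To make the distance step concrete, here is a minimal numerical sketch of laser triangulation with a single camera. It is not the paper's implementation: it assumes a simplified geometry in which the laser beam is parallel to the camera's optical axis at a known baseline, an ideal pinhole camera with focal length fx and principal point cx (both in pixels), and a simple thresholding-based spot_centre helper; all numeric values are hypothetical.

```python
import numpy as np

def spot_centre(gray):
    """Centroid of the brightest pixels: a simple stand-in for the
    laser-spot-centre extraction step (plain thresholding assumed)."""
    mask = gray > 0.9 * gray.max()
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def laser_spot_depth(u, cx, fx, baseline):
    """Depth of the laser spot along the optical axis (metres).

    Assumes the beam is parallel to the optical axis and offset from it
    by `baseline` metres; the paper's exact geometric relations among
    the emission point, the spot and the optical axis may differ.
    """
    offset = u - cx                  # horizontal pixel offset of the spot centre
    if abs(offset) < 1e-6:           # spot on the optical axis: depth unobservable
        raise ValueError("laser spot too close to the principal point")
    return fx * baseline / offset    # similar triangles: Z = f * b / (u - cx)

# Hypothetical numbers for illustration only
print(laser_spot_depth(u=742.0, cx=640.0, fx=1200.0, baseline=0.05))  # about 0.59 m
```

The precision behaviour noted in the abstract follows directly from this relation: a fixed one-pixel error in the spot centre corresponds to a larger depth error as the pixel offset shrinks, i.e. as the target moves farther away.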

  • Overview: Over the past decade, vision-based positioning technology has attracted increasing attention and has been widely used in robotics. Binocular vision is often installed at the end of a manipulator to obtain the position and orientation of targets. However, installing binocular vision reduces the flexibility and load capacity of the manipulator, and the problem becomes more obvious when the load capacity is low or the working space is narrow. Moreover, the price of binocular vision is still relatively high. To deal with these problems, this study puts forward a target localization method using a laser and monocular hand-eye vision. The low-priced laser device used in this study can only emit a light beam and cannot measure distance by itself. The hand-eye vision system is used to obtain the centre of the laser spot, and the geometric relations among the laser emission point, the light spot and the optical axis of the camera are applied to calculate the distance from the target point to the laser emitter. The Denavit–Hartenberg (D–H) convention is often used to calculate the position and orientation of links and joints in robotics. The distance from the target point to the laser emitter can be considered as an extended link of the manipulator. Under this assumption, the D-H method can be employed to construct a coordinate conversion system that contains both the laser beam and the mechanical manipulator, and the location of the target can then be calculated from this coordinate conversion system. The measuring precision is negatively correlated with the distance, so the method is suitable for position measurement at short and medium range; when a target is far away, the error becomes too large for the method to work effectively. The illumination of the working environment also affects the laser spot captured by the camera. Compared with commonly used binocular measurement methods, the proposed method uses only one camera, which reduces the width of the measurement system mounted on the manipulator and makes it more suitable for working in narrow workspaces. When searching for an object with a mobile robot, the arm is often required to enter a hole or a narrow gap, and the proposed method is especially suitable for such cases. Moreover, this design also reduces the weight of the sensor on the manipulator, which improves the effective load capacity of the manipulator.
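As a complement to the overview, the sketch below shows how the measured beam length can be folded into a D-H chain: classic Denavit-Hartenberg transforms are composed for the joints, and the laser distance is appended as one extra translation along the last frame's z axis, following the paper's idea of treating the beam as an extended link. The D-H parameters and the frame assignment for the beam are hypothetical placeholders, not the paper's actual manipulator values.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Classic Denavit-Hartenberg homogeneous transform for one link:
    Rot_z(theta) @ Trans_z(d) @ Trans_x(a) @ Rot_x(alpha)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def target_in_base(dh_rows, laser_distance):
    """Target position in the manipulator base frame.

    dh_rows holds (a, alpha, d, theta) tuples for the real joints; the
    measured laser distance is treated as a final translation along the
    beam direction (assumed here to be the last frame's z axis).
    """
    T = np.eye(4)
    for a, alpha, d, theta in dh_rows:
        T = T @ dh_transform(a, alpha, d, theta)
    target = T @ np.array([0.0, 0.0, laser_distance, 1.0])  # point on the beam
    return target[:3]

# Hypothetical 3-joint arm (metres, radians) plus a measured beam length
rows = [(0.0, np.pi / 2, 0.10, 0.3), (0.25, 0.0, 0.0, -0.6), (0.20, 0.0, 0.0, 0.4)]
print(target_in_base(rows, laser_distance=0.59))
```

In this formulation the beam behaves like a prismatic extension whose joint variable is the measured distance, which is what allows the standard D-H machinery to convert the spot position into base-frame coordinates.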

