Abstract:
A depth camera has a limited single-frame field of view, and stitching multiple frames introduces noise disturbance. To address these problems, a large-scale 3D target pose measurement and reconstruction method based on multi-view fusion is presented. The approach builds a hierarchical model of the depth camera's performance gradient, predicts the target pose with a multi-view scanning strategy based on point cloud normal vectors, and fits 3D models of targets with height-constrained RANSAC (HC-RANSAC). A depth camera mounted on the end of a robotic manipulator scans and measures the target from multiple viewpoints, and the sampled data are used to reconstruct the target model in the local coordinate system. Experimental results show that, compared with fixed depth cameras and classical reconstruction approaches based on pan-tilt vision, the proposed method offers a larger reconstruction field of view and higher reconstruction accuracy. It can reconstruct large targets at close range and achieves a good balance between field of view and precision.
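
As a rough illustration of the height-constrained RANSAC idea summarized above, the sketch below fits a plane to a point cloud while discarding candidate models whose height along the vertical axis falls outside a known bound. The function names, the choice of a plane as the fitted primitive, and the specific form of the constraint are assumptions made for illustration only, not the paper's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3+ points: returns (normal, d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)

def hc_ransac_plane(cloud, z_min, z_max, dist_thresh=0.01, iters=500, rng=None):
    """Hypothetical HC-RANSAC sketch: RANSAC plane fit that rejects candidate
    planes whose height along the z-axis lies outside [z_min, z_max]."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        normal, d = fit_plane(sample)
        if abs(normal[2]) < 1e-6:
            continue  # near-vertical plane: height along z is undefined
        # Height constraint: the plane's intersection with the z-axis must stay in range.
        height = -d / normal[2]
        if not (z_min <= height <= z_max):
            continue  # reject candidates that violate the prior height bound
        dist = np.abs(cloud @ normal + d)          # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers

# Toy usage: a horizontal plane at z = 0.5 with noise, plus random outliers.
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (500, 2)), 0.5 + 0.005 * rng.standard_normal(500)]
outliers = rng.uniform(-1, 1, (100, 3))
model, inliers = hc_ransac_plane(np.vstack([plane_pts, outliers]), z_min=0.3, z_max=0.7)
print(len(inliers), model)
```

The height bound acts as prior knowledge about where the target surface can lie, pruning spurious hypotheses that plain RANSAC would otherwise have to reject through inlier counting alone.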