• Abstract: Fringes projected by digital light processing (DLP) projectors with diamond pixels exhibit zigzag edges. When the projector pixel size is much larger than the camera pixel size, this significantly degrades the phase measurement accuracy of phase measurement profilometry. In particular, when vertical fringe patterns are projected, the fringe sampling frequency drops to half that of horizontal fringes and the zigzag effect becomes more pronounced. To improve the measurement accuracy of phase measurement profilometry (PMP) systems using diamond-pixel DLP projectors, a new method combining fringe pixel remapping with deep learning is proposed. First, a fringe pattern with a column-interleaved diamond linear pixel arrangement is designed through pixel remapping to increase the sampling frequency of vertical fringes. Then, a deep neural network (DNN) combining a residual network with the UNet architecture is constructed and trained to smooth the captured fringe images while preserving object details to the greatest extent. In addition, a training dataset construction method is proposed, in which defocused projection is combined with a phase-shifting algorithm to generate high-quality ground truth for the network output. Experimental results show that the proposed method significantly improves phase measurement accuracy while well preserving the details of the measured object.

       

      Abstract:
      Objective Phase measurement profilometry (PMP) has emerged as a key optical technique for acquiring three-dimensional (3D) surface information. It has been successfully adopted in fields such as precision manufacturing, biomedical imaging, robotics, security inspection, and digital entertainment. Recently, the demand for high-resolution optical 3D measurement has grown rapidly. In high-resolution PMP systems with digital projectors, the measurement error caused by pixel matching between the projector and the camera becomes significant. In ultra-high-resolution PMP systems with a high projector-to-camera pixel size ratio (PPSR), fringe patterns projected by diamond-pixel digital light processing (DLP) projectors exhibit prominent zigzag edges. Furthermore, the sampling frequency of vertical fringes is only half that of horizontal fringes, leading to a significant decline in phase measurement accuracy and becoming a key bottleneck that restricts system performance. Addressing this critical issue requires a comprehensive solution that mitigates both the pixel-dimension-induced and the discrete-sampling errors.
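The effect of the halved sampling frequency for vertical fringes can be illustrated with a simple one-dimensional sketch (an illustration of the underlying sampling issue, not code from the paper): sampling an ideal sinusoidal fringe at twice the pixel pitch and reconstructing it by interpolation yields a noticeably larger deviation from the ideal profile.

```python
import numpy as np

def fringe(x, period=12.0):
    """Ideal sinusoidal fringe profile (illustrative period and contrast)."""
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / period)

# Fine reference grid against which reconstructions are compared.
x_fine = np.linspace(0.0, 48.0, 4801)
ref = fringe(x_fine)

def rms_error(pitch):
    """RMS deviation after sampling at `pitch` and linear interpolation.

    pitch = 1 mimics the horizontal-fringe sampling rate; pitch = 2 mimics
    the halved rate of vertical fringes on a diamond pixel array.
    """
    xs = np.arange(0.0, 48.0 + pitch, pitch)
    rec = np.interp(x_fine, xs, fringe(xs))
    return np.sqrt(np.mean((rec - ref) ** 2))

err_full = rms_error(1.0)  # full sampling frequency
err_half = rms_error(2.0)  # halved sampling frequency
```

Because linear-interpolation error grows roughly quadratically with the sampling pitch, halving the sampling frequency multiplies the fringe reconstruction error several-fold, which is consistent with the accuracy loss described above.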
      Methods A high-precision measurement method combining pixel remapping and deep learning was proposed to overcome these challenges. Firstly, by analyzing the transmission characteristics of fringes in the PMP system and the pixel mapping law of diamond-pixel projectors, a pixel remapping scheme for column-interleaved diamond linear fringes (CIDLF) was designed. This scheme rearranges diamond pixels into column-interleaved straight-line arrangements, doubling the sampling frequency of vertical fringes while effectively alleviating the zigzag phenomenon on fringe edges. Unlike the traditional diamond zigzagged fringe (DZF) and rectangle linear fringe (RLF), CIDLF maintains a consistent fringe width between design and projection, ensuring better sinusoidality of the projected fringes. Secondly, a ResUNet deep neural network integrating residual and UNet architectures was constructed to smooth the captured fringes. The UNet component performs spatial downsampling and upsampling for high-level feature extraction and detail recovery through skip connections, which fuse low-level pixel information with high-level semantic features. Residual modules incorporated into both the encoder and decoder paths alleviate gradient vanishing and overfitting during deep network training, enhancing the network's ability to preserve fine object details while smoothing fringes. The network's basic convolution block (BasicConv) consists of a 3×3 convolution, batch normalization, dropout regularization, and rectified linear unit activation, ensuring efficient feature learning and noise suppression. To ensure reliable network training, a high-quality dataset construction method combining defocused projection and the 32-step phase-shifting algorithm was developed: both focused fringes and the corresponding defocused fringes are captured.
Then, fringe parameters including background, amplitude, and phase are obtained with the 32-step phase-shifting algorithm, and the averages of the background and amplitude of the two types of fringes are computed. A high-contrast smooth fringe generated from these parameters serves as the ground truth. The dataset included 19200 fringe images for training, 400 for validation, and 400 for testing, covering fringe patterns with periods of 8, 12, and 16 pixels. The ResUNet was trained using stochastic gradient descent (SGD) optimization with a mean squared error (MSE) loss function. Training spanned 100 epochs with a batch size of 8; the initial learning rate of 5×10⁻¹ was reduced to 5×10⁻² after 20 epochs, resulting in a final training MSE of 1.0128×10⁻⁵ and a validation MSE of 5.4261×10⁻⁵.
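The ground-truth construction rests on standard N-step phase-shifting demodulation: from N fringes I_n = A + B·cos(φ + 2πn/N), the background A, modulation B, and wrapped phase φ are recovered in closed form. A minimal numpy sketch of this step (the function name and test values are ours, not from the paper):

```python
import numpy as np

def demodulate(frames):
    """Recover background A, modulation B and wrapped phase phi from
    N phase-shifted fringes I_n = A + B * cos(phi + 2*pi*n/N)."""
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    # Weighted sums over the shift index: s = -(N/2)*B*sin(phi),
    # c = (N/2)*B*cos(phi); the background terms cancel for N >= 3.
    s = np.tensordot(np.sin(deltas), frames, axes=1)
    c = np.tensordot(np.cos(deltas), frames, axes=1)
    a = frames.mean(axis=0)          # background A
    b = 2.0 / n * np.hypot(s, c)     # modulation amplitude B
    phi = np.arctan2(-s, c)          # wrapped phase in (-pi, pi]
    return a, b, phi

# Synthesize a 32-step sequence with a 16-pixel period and demodulate it.
x = np.arange(64)
phase = 2 * np.pi * x / 16.0
frames = [100 + 80 * np.cos(phase + 2 * np.pi * k / 32) for k in range(32)]
a, b, phi = demodulate(frames)
```

With many steps (here 32), this demodulation strongly averages out noise and pixel-structure artifacts, which is why it is a suitable source of high-quality ground truth for the network.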
      Results and Discussions An experimental system composed of a LightCrafter 4500 projector (1140 pixel×912 pixel) and a Bonito CL-400C camera (2320 pixel×1736 pixel) with a Nikon zoom lens was established to verify the method. The PPSR was adjusted by changing the focal length of the camera lens, covering values from 0.5 to 85. Experimental results demonstrated that as the PPSR increased, the pixel structure of the projector became more distinct in the captured fringe patterns, leading to a more pronounced regular distribution of phase measurement errors. For the traditional DZF, the phase error reached 0.3013 rad in the focused state, while CIDLF achieved a significantly lower error of 0.0898 rad, outperforming both DZF and RLF under all focusing conditions. When combined with ResUNet processing, the phase measurement error of CIDLF under the 4-step phase-shifting algorithm was reduced to 0.0225 rad, approaching the 0.0223 rad error of the 32-step algorithm. This performance is particularly valuable for high-speed 3D measurement applications that require fewer phase shifts. In measurements of step-shaped objects and objects with step reflectivity, CIDLF processed by ResUNet showed phase curves nearly identical to those of defocused CIDLF, with steeper edge transitions and smoother reflectivity-boundary regions than processed DZF. For complex objects, the proposed method maintained low phase measurement errors even in intricate regions, outperforming defocused projection and DZF processing. The method effectively balances fringe smoothness and detail preservation, resolving the trade-off between error reduction and information retention in high-PPSR systems.
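The phase-error figures quoted above are typically root-mean-square values of the wrapped difference between a measured phase map and a reference one (e.g. the 32-step result). A hedged sketch of such a metric (our formulation; the paper does not specify its exact error definition):

```python
import numpy as np

def rms_phase_error(phi_meas, phi_ref):
    """RMS of the phase difference, wrapped to (-pi, pi] so that
    2*pi ambiguities do not inflate the error."""
    diff = np.angle(np.exp(1j * (phi_meas - phi_ref)))
    return np.sqrt(np.mean(diff ** 2))

# Example: a measured phase contaminated with small Gaussian noise.
rng = np.random.default_rng(0)
phi_ref = np.linspace(-np.pi, np.pi, 1000)
phi_meas = phi_ref + rng.normal(0.0, 0.05, phi_ref.size)
err = rms_phase_error(phi_meas, phi_ref)
```

Wrapping the difference through the complex exponential makes the metric insensitive to 2π offsets between the two phase maps, which matters when comparing wrapped phases near the ±π boundary.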
      Conclusions This method effectively resolves the measurement error issues caused by diamond pixel DLP projectors under high PPSR conditions. By combining pixel remapping to optimize fringe sampling and ResUNet to refine captured images, it achieves both high measurement accuracy and detail preservation. The proposed dataset construction method ensures robust network training, enabling the system to perform well with fewer phase shifts. This work provides a feasible approach for achieving high-precision, high-resolution, and high-speed 3D measurement. Future research will focus on adapting the network to varying system configurations and expanding its applicability to dynamic measurement scenarios.