    • Abstract: Light field depth estimation is an important scientific problem in light field processing and applications. However, existing studies overlook the geometric occlusion relationships among light field views. By analyzing the occlusion among different views, this paper proposes an unsupervised light field depth estimation method based on sub-light field occlusion fusion. The method first adopts an effective sub-light field division mechanism to account for the depth relationships at different angular positions; specifically, the views on the primary and secondary diagonals of the light field sub-aperture array are divided into four sub-light fields: top-left, top-right, bottom-left, and bottom-right. Then, spatial pyramid pooling feature extraction and a U-Net network are used to estimate the sub-light field depths. Finally, an occlusion fusion strategy is designed to fuse all sub-light field depths into the final depth; the strategy assigns greater weights to the sub-light field depth maps with higher accuracy in occluded regions, thereby reducing the effect of occlusion. In addition, a weighted spatial consistency loss and an angular consistency loss are introduced to constrain network training and enhance robustness. Experimental results show that the proposed method performs well in both quantitative metrics and qualitative comparisons.


      Abstract: Light field depth estimation is an important scientific problem in light field processing and applications. However, existing studies ignore the geometric occlusion relationships among views in a light field. By analyzing the occlusion among different views, this paper proposes an unsupervised light field depth estimation method based on sub-light field occlusion fusion. The method first adopts an effective sub-light field division mechanism to account for the depth relationships at different angular positions. Specifically, the views on the primary and secondary diagonals of the light field sub-aperture array are divided into four sub-light fields, i.e., top-left, top-right, bottom-left, and bottom-right. Then, spatial pyramid pooling feature extraction and a U-Net network are leveraged to estimate the depth of each sub-light field. Finally, an occlusion fusion strategy is designed to fuse all sub-light field depths into the final depth. This strategy assigns greater weights to the sub-light field depth maps with higher accuracy in occluded regions, thereby reducing the effect of occlusion. In addition, a weighted spatial consistency loss and an angular consistency loss are employed to constrain network training and enhance robustness. Experimental results demonstrate that the proposed method achieves favorable performance in both quantitative metrics and qualitative comparisons.
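The division and fusion steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 9x9 sub-aperture array with each sub-light field taken as the half-diagonal running from one corner to the central view, and it models occlusion fusion as a per-pixel weighted average of the four sub-light-field depth maps (how the occlusion weights themselves are computed is not shown).

```python
import numpy as np

def divide_sublightfields(n=9):
    """Split the views on the primary and secondary diagonals of an
    n x n sub-aperture array into four sub-light fields (top-left,
    top-right, bottom-left, bottom-right), each running from a corner
    to the central view. n = 9 is a common angular resolution; this
    exact grouping is an assumption for illustration."""
    c = n // 2  # index of the central view
    subs = {"top_left": [], "top_right": [], "bottom_left": [], "bottom_right": []}
    for u in range(n):
        for v in range(n):
            on_primary = (u == v)            # primary diagonal
            on_secondary = (u + v == n - 1)  # secondary diagonal
            if not (on_primary or on_secondary):
                continue  # only diagonal views are used
            if u <= c and v <= c:
                subs["top_left"].append((u, v))
            if u <= c and v >= c:
                subs["top_right"].append((u, v))
            if u >= c and v <= c:
                subs["bottom_left"].append((u, v))
            if u >= c and v >= c:
                subs["bottom_right"].append((u, v))
    return subs

def fuse_depths(depths, weights):
    """Occlusion-aware fusion as a per-pixel weighted average.
    depths, weights: arrays of shape (4, H, W); weights are assumed
    non-negative, with larger values where a sub-light field is more
    reliable in occluded regions."""
    w = weights / weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (w * depths).sum(axis=0)
```

Note that the central view lies on both diagonals, so each of the four sub-light fields shares it; for a 9x9 array each sub-light field contains 5 views (17 distinct diagonal views in total).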