
       

      Abstract: To address two weaknesses of the single-stage, bounding-box-based YOLACT algorithm, namely its inability to localize and extract regions of interest and the difficulty of distinguishing two overlapping bounding boxes, this paper proposes an anchor-free instance segmentation method built on the improved YOLACTR algorithm. Mask generation is decoupled into feature learning and convolution-kernel learning: a feature aggregation network generates the mask features, position information is added to the feature map, and a multi-layer Transformer with bidirectional attention produces the dynamic convolution kernels. Experimental results show that the method achieves a mask accuracy (AP) of 35.2% on the MS COCO public dataset; relative to YOLACT, it improves mask accuracy by 25.7%, small-object detection accuracy by 37.1%, medium-object detection accuracy by 25.8%, and large-object detection accuracy by 21.9%. Compared with YOLACT, Mask R-CNN, SOLO, and other methods, the proposed algorithm shows clear advantages in segmentation accuracy and edge-detail preservation, performs especially well on overlapping-object segmentation and small-object detection, and effectively resolves the erroneous segmentation that traditional methods produce in regions where instance boundaries overlap.
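The decoupled mask pipeline the abstract describes, shared mask features with position information added, and one dynamic kernel per instance applied as a 1×1 convolution, can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions (CoordConv-style coordinate channels for the position information, sigmoid mask decoding); all shapes and function names are illustrative and not the paper's implementation:

```python
import numpy as np

def add_position_info(feat):
    """Append normalized (x, y) coordinate channels to a C x H x W feature map
    so the mask head becomes position-aware (an assumed CoordConv-style scheme
    for the 'position information' mentioned in the abstract)."""
    c, h, w = feat.shape
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return np.concatenate([feat, xx[None], yy[None]], axis=0)  # (C+2) x H x W

def dynamic_mask_head(mask_feat, kernels):
    """Decode one mask per instance: each predicted kernel acts as a 1x1
    convolution over the shared mask features, followed by a sigmoid."""
    c, h, w = mask_feat.shape
    n = kernels.shape[0]                   # number of instances
    flat = mask_feat.reshape(c, h * w)     # C x (H*W)
    logits = kernels @ flat                # N x (H*W): per-instance responses
    masks = 1.0 / (1.0 + np.exp(-logits))  # sigmoid into [0, 1]
    return masks.reshape(n, h, w)

# Toy example: 8-channel mask features on a 16x16 grid, 3 instances.
rng = np.random.default_rng(0)
feat = add_position_info(rng.standard_normal((8, 16, 16)))  # now 10 channels
kernels = rng.standard_normal((3, 10))   # stand-in for Transformer-predicted kernels
masks = dynamic_mask_head(feat, kernels)
print(masks.shape)  # (3, 16, 16)
```

In the paper's method the kernels would come from the multi-layer Transformer with bidirectional attention rather than from random numbers; the sketch only shows how decoupled features and kernels recombine into per-instance masks.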