    • Abstract: To address the problems of blurred object edges caused by complex backgrounds in dim scenes, and of small objects being easily occluded, misdetected, or missed, an improved YOLOv8s algorithm, YOLOv8-GAIS, is proposed. A four-head adaptive multi-dimensional feature fusion (FAMFF) strategy is adopted to filter conflicting information in the spatial domain. A small object detection head is added to handle the large object scale variation in aerial views. A spatially enhanced attention mechanism (SEAM) is introduced to strengthen the network's ability to capture dim and occluded regions. The InnerSIoU loss function, which focuses on core regions, is adopted to improve the detection performance for occluded objects. Field scenes are collected to expand the VisDrone2021 dataset, and the Gamma and SAHI (slicing aided hyper inference) algorithms are applied for preprocessing, balancing the number of different object classes in low-illumination scenes and optimizing the model's generalization ability and detection accuracy. Comparative experiments show that, relative to the baseline model, the improved model reduces the parameter size by 1.53 MB, raises mAP50 by 6.9% and mAP50-95 by 5.6%, and cuts computation by 7.2 GFLOPs. Field experiments were conducted on Dagu South Road, Jinnan District, Tianjin, to determine the most suitable image acquisition altitude for the unmanned aerial vehicle (UAV); the results show that at a flight altitude of 60 m the model reaches its highest detection accuracy, with an mAP50 of 77.8%.

       

      Abstract: To address the issue of complex backgrounds in dim scenes, which blur object edges and obscure small objects, leading to misdetection and omission, an improved YOLOv8s algorithm, YOLOv8-GAIS, is proposed. The FAMFF (four-head adaptive multi-dimensional feature fusion) strategy is designed to achieve spatial filtering of conflicting information. A small object detection head is incorporated to address the large object scale variation in aerial views. The SEAM (spatially enhanced attention mechanism) is introduced to enhance the network's ability to attend to and capture occluded parts in low-illumination situations. The InnerSIoU loss function is adopted to emphasize the core regions, thereby improving the detection performance for occluded objects. Field scenes are collected to expand the VisDrone2021 dataset, and the Gamma and SAHI (slicing aided hyper inference) algorithms are applied for preprocessing. This helps balance the distribution of different object types in low-illumination scenarios, optimizing the model's generalization ability and detection accuracy. Comparative experiments show that, compared to the baseline model, the improved model reduces the number of parameters by 1.53 MB, increases mAP50 by 6.9% and mAP50-95 by 5.6%, and reduces model computation by 7.2 GFLOPs. In addition, field experiments were conducted on Dagu South Road, Jinnan District, Tianjin City, China, to determine the optimal altitude for image acquisition by UAVs. The results show that, at a flight altitude of 60 m, the model achieves its highest detection accuracy, with an mAP50 of 77.8%.
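The abstract's Gamma preprocessing step, used to brighten low-illumination frames before training, can be sketched as a standard power-law correction. This is a minimal illustrative sketch, not the paper's exact pipeline: the gamma value of 0.5 below is an assumption, as the abstract does not state the value used.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Apply power-law (Gamma) correction to a uint8 image.

    gamma < 1 brightens dark regions, which is the typical setting
    for low-illumination scenes; gamma = 0.5 here is illustrative.
    """
    # Normalize to [0, 1], apply the power law, rescale to [0, 255].
    norm = image.astype(np.float32) / 255.0
    corrected = np.power(norm, gamma)
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)

# A uniformly dark 2x2 RGB patch (value 64) becomes noticeably brighter.
dark = np.full((2, 2, 3), 64, dtype=np.uint8)
bright = gamma_correct(dark, gamma=0.5)
```

Because the correction is applied per pixel before slicing, it composes cleanly with the SAHI tiling step mentioned in the abstract.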