
       

      Abstract: With the widespread deployment of low-altitude drones, real-time detection of such low, slow, and small targets is crucial for public safety. Traditional cameras capture image frames with a fixed exposure time and therefore adapt poorly to changing illumination, leaving detection blind spots under intense light and other challenging conditions. Event cameras, a new type of neuromorphic sensor, perceive brightness changes independently at each pixel and can still generate high-frequency, sparse event data under complex lighting. To address the difficulty of applying image-based detection methods to the sparse, irregular data produced by event cameras, this paper reformulates the two-dimensional object detection task as semantic segmentation in a three-dimensional spatio-temporal point cloud and proposes a drone segmentation model based on dual-view fusion. On a real drone detection dataset collected with an event camera, experiments show that the proposed method achieves the best detection performance while remaining real-time, enabling stable detection of drone targets.
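
The core idea of treating event data as a 3-D spatio-temporal point cloud can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the commonly used conversion step, assuming events arrive as (x, y, t, p) tuples and that timestamps are rescaled so the temporal axis is geometrically comparable to the spatial axes (the function name and the `time_scale` heuristic are illustrative assumptions):

```python
import numpy as np

def events_to_point_cloud(events, width, height, time_scale=None):
    """Convert an event stream into a 3-D spatio-temporal point cloud.

    `events` is an (N, 4) array-like of (x, y, t, p): pixel
    coordinates, timestamp, and polarity. Timestamps are rescaled
    so the time axis has a similar extent to the image plane,
    letting point-cloud segmentation networks treat time as a
    third geometric dimension.
    """
    events = np.asarray(events, dtype=np.float64)
    x, y, t, p = events.T
    # Shift time to start at zero, then scale to the image extent
    # so no single axis dominates nearest-neighbor distances.
    t = t - t.min()
    span = t.max() if t.max() > 0 else 1.0
    if time_scale is None:
        time_scale = float(max(width, height))
    t = t / span * time_scale
    points = np.stack([x, y, t], axis=1)   # (N, 3) coordinates
    features = p.reshape(-1, 1)            # polarity kept as a per-point feature
    return points, features
```

After this conversion, each event is a point in (x, y, t) space with its polarity as a feature, so off-the-shelf point-cloud segmentation backbones can label drone versus background events directly on the sparse data.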