Abstract: With the rapid development of convolutional neural networks (CNNs) and Transformer models, significant progress has been made in remote sensing image super-resolution (RSISR) reconstruction. However, existing methods struggle to handle object features at different scales and fail to fully exploit the implicit correlations between the channel and spatial dimensions, which limits further gains in reconstruction quality. To address these issues, this paper proposes an adaptive dual-domain attention network (ADAN). The network fuses self-attention information from both the channel and spatial domains to strengthen feature extraction; a multi-scale feed-forward network (MSFFN) is designed to capture rich multi-scale features; and a novel gated convolution module further improves the representation of local features. A U-shaped backbone enables efficient multi-level feature fusion. Experimental results on multiple publicly available remote sensing datasets show that ADAN significantly outperforms state-of-the-art approaches in both quantitative metrics (e.g., PSNR and SSIM) and visual quality. These results validate the effectiveness and superiority of ADAN and provide a new technical route for remote sensing image super-resolution reconstruction.
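Since the abstract describes the building blocks only at a high level, the following is a minimal PyTorch sketch of how three of the named components might look. All module names (ChannelAttention, MSFFN, GatedConv), the transposed channel-attention formulation, kernel sizes, and hyper-parameters are illustrative assumptions, not the paper's actual ADAN implementation; the spatial-attention branch and the U-shaped backbone are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Self-attention computed across channels (transposed attention),
    so the attention map is (channels x channels) rather than
    (pixels x pixels). An assumed stand-in for the channel-domain branch."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        # (b, heads, c_per_head, h*w): tokens are channels, features are pixels
        shape = (b, self.num_heads, c // self.num_heads, h * w)
        q, k, v = q.reshape(shape), k.reshape(shape), v.reshape(shape)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # channel-channel map
        out = attn.softmax(dim=-1) @ v
        return self.proj(out.reshape(b, c, h, w))


class MSFFN(nn.Module):
    """Multi-scale feed-forward network: parallel depth-wise convolutions
    with different kernel sizes capture features at several scales."""
    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dw3 = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.dw5 = nn.Conv2d(hidden, hidden, 5, padding=2, groups=hidden)
        self.shrink = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.expand(x)
        return self.shrink(F.gelu(self.dw3(x)) + F.gelu(self.dw5(x)))


class GatedConv(nn.Module):
    """Gated convolution: a sigmoid gate branch modulates the feature
    branch element-wise, emphasising informative local responses."""
    def __init__(self, dim: int):
        super().__init__()
        self.feat = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.gate = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.feat(x) * torch.sigmoid(self.gate(x))


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)  # (batch, channels, height, width)
    block = nn.Sequential(ChannelAttention(32), MSFFN(32), GatedConv(32))
    print(block(x).shape)  # torch.Size([1, 32, 64, 64])
```

One design note: attending over channels keeps the attention map's cost quadratic in the channel count rather than in the number of pixels, which is why channel-domain attention is a common choice for high-resolution restoration inputs such as remote sensing imagery.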