Salient object detection in low-light RGB-T scene via spatial-frequency cues mining

Neural Networks

School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China.

Published: October 2024

Low-light conditions pose significant challenges to vision tasks such as salient object detection (SOD) because too few photons reach the sensor. Light-insensitive RGB-T SOD models mitigate this problem to some extent, but their performance is limited because they focus only on spatial feature fusion and ignore the discrepancy between frequency components. To this end, we propose an RGB-T SOD model for low-light scenes, called SFMNet, that mines spatial-frequency cues. SFMNet consists of spatial-frequency feature exploration (SFFE) modules and spatial-frequency feature interaction (SFFI) modules. Specifically, the SFFE module separates spatial and frequency features and adaptively extracts high- and low-frequency components. The SFFI module then integrates cross-modality and cross-domain information to capture effective feature representations. By deploying both modules along a top-down pathway, our method generates high-quality saliency predictions. Furthermore, we construct the first low-light RGB-T SOD dataset as a benchmark for evaluating performance. Extensive experiments demonstrate that SFMNet achieves higher accuracy than existing models in low-light scenes.
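The frequency separation the SFFE module performs can be illustrated with a simple fixed-cutoff decomposition. The sketch below is not the paper's method (SFFE learns the separation adaptively); it only shows the underlying idea of splitting a feature map into low- and high-frequency bands via an FFT mask. The function name `split_frequency` and the cutoff parameter `radius_ratio` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def split_frequency(feat, radius_ratio=0.25):
    """Split a 2-D feature map into low- and high-frequency components
    with a circular low-pass mask in the Fourier domain.
    Illustrative only: SFMNet's SFFE module extracts the bands
    adaptively rather than with a fixed cutoff."""
    h, w = feat.shape
    spec = np.fft.fftshift(np.fft.fft2(feat))          # centered spectrum
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius_ratio * min(h, w)            # low-pass disk
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(spec * ~mask)))
    return low, high

rng = np.random.default_rng(0)
feat = rng.standard_normal((32, 32))   # stand-in for one feature channel
low, high = split_frequency(feat)
# The two bands are complementary: they sum back to the original map.
assert np.allclose(low + high, feat)
```

Because the low-pass and high-pass masks partition the spectrum, the two bands always reconstruct the input exactly; a learned module such as SFFE can instead weight frequency components per feature before fusion.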

DOI: http://dx.doi.org/10.1016/j.neunet.2024.106406
