Middle-Level Feature Fusion for Lightweight RGB-D Salient Object Detection

Most existing RGB-D salient object detection (SOD) models adopt a two-stream structure to extract information from the input RGB and depth images. Since they use two subnetworks for unimodal feature extraction and multiple multi-modal feature fusion modules for extracting cross-modal complementary information, these models require a huge number of parameters, hindering their real-life application. To remedy this situation, we propose a novel middle-level feature fusion structure that allows us to design a lightweight RGB-D SOD model. Specifically, the proposed structure first employs two shallow subnetworks to extract low- and middle-level unimodal RGB and depth features, respectively. Afterward, instead of integrating middle-level unimodal features multiple times at different layers, we fuse them only once via a specially designed fusion module. On top of that, high-level multi-modal semantic features are further extracted for final salient object detection via an additional subnetwork. This greatly reduces the number of network parameters. Moreover, to compensate for the performance loss due to parameter reduction, a relation-aware multi-modal feature fusion module is specially designed to effectively capture the cross-modal complementary information during the fusion of middle-level multi-modal features. By enabling the feature-level and decision-level information to interact, we maximize the usage of the fused cross-modal middle-level features and the extracted cross-modal high-level features for saliency prediction. Experimental results on several benchmark datasets verify the effectiveness and superiority of the proposed method over state-of-the-art methods. Remarkably, our proposed model has only 3.9M parameters and runs at 33 FPS.
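To make the structure concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: two shallow unimodal encoders, a single middle-level fusion step, and one shared high-level subnetwork. All module names, channel widths, and layer counts are illustrative assumptions, not the authors' implementation, and the plain concatenation-based FusionModule is only a stand-in for the paper's relation-aware fusion module; the feature-level/decision-level interaction is likewise omitted.

# Minimal sketch of the middle-level fusion structure described in the
# abstract. Channel widths and layer counts are illustrative assumptions;
# the relation-aware fusion module is replaced by a simple concat-and-conv.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    """3x3 conv + BN + ReLU: the basic unit of the shallow encoders."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ShallowEncoder(nn.Module):
    """Extracts low- and middle-level unimodal features (one per modality)."""
    def __init__(self, in_ch):
        super().__init__()
        self.low = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 32))
        self.mid = nn.Sequential(conv_block(32, 64, stride=2), conv_block(64, 64))

    def forward(self, x):
        low = self.low(x)
        mid = self.mid(low)
        return low, mid

class FusionModule(nn.Module):
    """Stand-in for the relation-aware multi-modal fusion module: the
    middle-level RGB and depth features are fused exactly once here."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = conv_block(2 * ch, ch)

    def forward(self, f_rgb, f_depth):
        return self.fuse(torch.cat([f_rgb, f_depth], dim=1))

class MiddleFusionSOD(nn.Module):
    """Two shallow unimodal encoders -> one fusion -> shared high-level
    subnetwork -> saliency head."""
    def __init__(self):
        super().__init__()
        self.rgb_enc = ShallowEncoder(3)    # RGB input
        self.depth_enc = ShallowEncoder(1)  # single-channel depth input
        self.fusion = FusionModule(64)
        # A single subnetwork extracts high-level multi-modal semantics
        # from the fused features, instead of two deep per-modality streams.
        self.high = nn.Sequential(conv_block(64, 128, stride=2),
                                  conv_block(128, 128))
        self.head = nn.Conv2d(128, 1, 1)    # saliency logits
        self.up = nn.Upsample(scale_factor=4, mode="bilinear",
                              align_corners=False)

    def forward(self, rgb, depth):
        _, m_rgb = self.rgb_enc(rgb)
        _, m_d = self.depth_enc(depth)
        fused = self.fusion(m_rgb, m_d)     # middle-level fusion happens once
        sal = self.head(self.high(fused))
        return self.up(sal)                 # back to input resolution

if __name__ == "__main__":
    model = MiddleFusionSOD()
    rgb = torch.randn(1, 3, 256, 256)
    depth = torch.randn(1, 1, 256, 256)
    print(model(rgb, depth).shape)          # torch.Size([1, 1, 256, 256])

The parameter saving comes from fusing once: everything after FusionModule is a single stream, so the deep high-level layers are not duplicated per modality.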

Source: http://dx.doi.org/10.1109/TIP.2022.3214092

Publication Analysis

Top Keywords

feature fusion: 16
salient object: 12
object detection: 12
middle-level feature: 8
lightweight rgb-d: 8
rgb-d salient: 8
rgb depth: 8
multi-modal feature: 8
cross-modal complementary: 8
middle-level unimodal: 8
