Panoptic Scene Graph generation (PSG) is a challenging task in Scene Graph Generation (SGG) that aims to create a more comprehensive scene graph representation by grounding objects with panoptic segmentation masks instead of bounding boxes. Compared to SGG, PSG poses two additional challenges: it requires pixel-level segment outputs, and it must explore the full set of relationships, including those between both thing and stuff classes. As a result, current PSG methods have limited performance, which hinders downstream tasks and applications. This work aims to design a novel and strong baseline for PSG. To that end, we first conduct an in-depth analysis to identify the bottleneck of current PSG models, finding that inter-object pair-wise recall is a crucial factor ignored by previous PSG methods. Based on this insight and recent query-based frameworks, we present a novel framework, Pair then Relation (Pair-Net), which uses a Pair Proposal Network (PPN) to learn and filter sparse pair-wise relationships between subjects and objects. We also observed the sparse nature of object pairs; motivated by this, we design a lightweight Matrix Learner within the PPN, which directly learns the pair-wise relation matrix for pair proposal generation. Through extensive ablations and analysis, we show that our approach significantly improves upon a strong segmenter baseline. Notably, our method achieves over 10% absolute gains compared to our baseline, PSGFormer.
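The pair-proposal idea above can be sketched in a few lines: score every subject-object pair of query embeddings with a learned matrix, then keep only the top-k most confident pairs as sparse proposals. This is a minimal illustrative sketch, not the paper's actual implementation; all dimensions, the bilinear scoring form, and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_queries, dim, top_k = 6, 16, 4

# Hypothetical subject/object query embeddings from a query-based segmenter.
subj = rng.standard_normal((num_queries, dim))
obj = rng.standard_normal((num_queries, dim))

# "Matrix Learner"-style sketch: a learned weight W scores every
# subject-object pair at once, giving an N x N pair-confidence matrix.
W = rng.standard_normal((dim, dim))
scores = subj @ W @ obj.T          # (num_queries, num_queries)
np.fill_diagonal(scores, -np.inf)  # a segment cannot relate to itself

# Keep only the top-k most confident pairs: the sparse pair proposals
# that a relation head would then classify.
flat = np.argsort(scores, axis=None)[::-1][:top_k]
pairs = [divmod(int(i), num_queries) for i in flat]
print(pairs)  # (subject_idx, object_idx) proposals
```

Filtering to a sparse set of pairs before relation classification is what distinguishes this design from scoring all N x N pairs exhaustively.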


Source
http://dx.doi.org/10.1109/TPAMI.2024.3442301


Similar Publications

Visual semantic decoding aims to extract perceived semantic information from the visual responses of the human brain and convert it into interpretable semantic labels. Although significant progress has been made in semantic decoding across individual visual cortices, studies on the semantic decoding of the ventral and dorsal cortical visual pathways remain limited. This study proposed a graph neural network (GNN)-based semantic decoding model on a natural scene dataset (NSD) to investigate the decoding differences between the dorsal and ventral pathways in processing various parts of speech, including verbs, nouns, and adjectives.
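As a rough illustration of the kind of GNN layer such a decoder might build on, the sketch below runs one graph-convolution step over brain-response node features and produces per-node class scores (e.g. verb/noun/adjective labels). Everything here is an assumption for illustration: the shapes, the sample connectivity, and the symmetric-normalization GCN form are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 5, 8, 3   # e.g. ROIs as graph nodes, 3 labels

X = rng.standard_normal((num_nodes, in_dim))   # node features (responses)
A = np.eye(num_nodes)                          # adjacency with self-loops
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0    # sample connectivity

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in standard GCNs.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

W = rng.standard_normal((in_dim, out_dim))     # learnable weights
H = np.maximum(A_norm @ X @ W, 0)              # propagate + ReLU

# Softmax over classes gives per-node label probabilities.
probs = np.exp(H) / np.exp(H).sum(axis=1, keepdims=True)
print(probs.shape)  # (5, 3)
```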


DICCR: Double-gated intervention and confounder causal reasoning for vision-language navigation.

Neural Netw

December 2024

School of Computer and Electronic Information, Guangxi University, University Road, Nanning, 530004, Guangxi, China. Electronic address:

Vision-language navigation (VLN) is a challenging task that requires agents to capture the correlation between different modalities from redundant information according to instructions, and then make sequential decisions in the action space based on visual scenes and text instructions. Recent research has focused on extracting visual features and enhancing text knowledge, ignoring the potential bias in multi-modal data and the problem of spurious correlations between vision and text. Therefore, this paper studies the relationship structure between multi-modal data from the perspective of causality and weakens the potential correlation between different modalities through cross-modal causality reasoning.


Visual question generation involves the generation of meaningful questions about an image. Although we have made significant progress in automatically generating a single high-quality question related to an image, existing methods often ignore the diversity and interpretability of generated questions, which are important for various daily tasks that require clear question sources. In this paper, we propose an explicitly diverse visual question generation model that aims to generate diverse questions based on interpretable question sources.


VIIDA and InViDe: computational approaches for generating and evaluating inclusive image paragraphs for the visually impaired.

Disabil Rehabil Assist Technol

December 2024

Department of Informatics, Universidade Federal de Viçosa - UFV, Viçosa, Brazil.

Article Synopsis
  • Existing image description methods for blind or low vision individuals are often inadequate, either oversimplifying visuals into short captions or overwhelming users with lengthy descriptions.
  • VIIDA is introduced as a new procedure to enhance image description specifically for webinar scenes, along with InViDe, a metric for evaluating these descriptions based on accessibility for BLV people.
  • Utilizing advanced tech like a multimodal Visual Question Answering model and Natural Language Processing, VIIDA effectively creates descriptions closely matching image content, while InViDe provides insights into the effectiveness of various methods, fostering further development in Assistive Technologies.

Wi-Fi sensing gesture control algorithm based on semi-supervised generative adversarial network.

PeerJ Comput Sci

October 2024

Joint Laboratory for International Cooperation of the Special Optical Fiber and Advanced Communication, Shanghai University, Shanghai, China.

A Wi-Fi-sensing gesture control system for smart homes has been developed based on a theoretical investigation of the Fresnel region sensing model, addressing the need for non-contact gesture control in household environments. The system collects channel state information (CSI) related to gestures from Wi-Fi signals transmitted and received by network cards within a specific area. The collected data undergoes preprocessing to eliminate environmental interference, allowing for the extraction of complete gesture sets.
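The preprocessing step described above, removing environmental interference from the raw CSI stream, can be approximated with a simple low-pass filter, since hand gestures are slow relative to multipath noise. The sketch below is a minimal moving-average illustration under assumed signal parameters (sampling rate, gesture frequency, noise level); the paper's actual pipeline is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CSI amplitude stream: a slow gesture component plus
# high-frequency environmental noise (all rates/amplitudes are assumptions).
t = np.linspace(0, 2, 400)
gesture = np.sin(2 * np.pi * 1.5 * t)        # slow hand motion
noise = 0.4 * rng.standard_normal(t.size)    # multipath/thermal interference
csi_amplitude = gesture + noise

def moving_average(x, window=15):
    """Simple low-pass step: average each sample over a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

smoothed = moving_average(csi_amplitude)

# The smoothed trace tracks the gesture far better than the raw stream.
raw_err = np.mean((csi_amplitude - gesture) ** 2)
smooth_err = np.mean((smoothed - gesture) ** 2)
print(raw_err, smooth_err)
```

In practice a CSI pipeline would likely use a proper band-pass filter and phase sanitization, but the principle, suppressing components faster than any plausible gesture, is the same.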

