To increase the generalization capability of VQA systems, many recent studies have tried to remove spurious language or vision associations that shortcut the question or image to the answer. Despite these efforts, the literature has not addressed the confounding effect of vision and language simultaneously; as a result, when methods reduce the bias learned from one modality, they usually increase the bias from the other. In this paper, we first model a confounding effect that causes language and vision bias simultaneously, then propose counterfactual inference to remove the influence of this effect. A model trained with this strategy can concurrently and efficiently reduce vision and language bias. To the best of our knowledge, this is the first work to reduce biases resulting from the confounding effect of vision and language in VQA by leveraging causal explain-away relations. Paired with the explain-away strategy, our method improves accuracy on questions with numerical answers, which has remained an open problem for existing methods. The proposed method outperforms state-of-the-art methods on the VQA-CP v2 dataset. To give readers additional context on the experimental setup and implementation, we released the code and documentation; our code is available at https://github.com/ali-vosoughi/PW-VQA.
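The abstract does not spell out the inference rule, so the following is only a minimal sketch of one common counterfactual-debiasing pattern in VQA: jointly training a fused branch alongside question-only and image-only shortcut branches, then subtracting the unimodal logits at test time. It is not the PW-VQA implementation; the branch names, feature dimensions, and the alpha/beta weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CounterfactualVQADebias(nn.Module):
    """Illustrative counterfactual-style debiasing head for VQA.

    Three answer branches are trained jointly: a fused vision+language
    branch and two unimodal "shortcut" branches. At inference the unimodal
    logits are subtracted from the fused logits, approximating the removal
    of the bias captured by each single modality. All dimensions and the
    alpha/beta weights are illustrative assumptions, not the PW-VQA spec.
    """

    def __init__(self, v_dim=2048, q_dim=1024, hidden=512,
                 num_answers=3129, alpha=1.0, beta=1.0):
        super().__init__()
        self.fused_head = nn.Sequential(
            nn.Linear(v_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers))
        self.q_only_head = nn.Sequential(
            nn.Linear(q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers))
        self.v_only_head = nn.Sequential(
            nn.Linear(v_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers))
        self.alpha, self.beta = alpha, beta

    def forward(self, v_feat, q_feat, counterfactual=False):
        fused = self.fused_head(torch.cat([v_feat, q_feat], dim=-1))
        q_bias = self.q_only_head(q_feat)
        v_bias = self.v_only_head(v_feat)
        if counterfactual:
            # Test-time debiased prediction: remove what each unimodal
            # shortcut branch would have answered on its own.
            return fused - self.alpha * q_bias - self.beta * v_bias
        # Training returns all branches so each can receive its own loss.
        return fused, q_bias, v_bias


if __name__ == "__main__":
    model = CounterfactualVQADebias()
    v = torch.randn(4, 2048)   # image features (e.g., pooled region features)
    q = torch.randn(4, 1024)   # question features (e.g., GRU/BERT encoding)
    logits = model(v, q, counterfactual=True)
    print(logits.shape)        # torch.Size([4, 3129])
```

In this pattern each branch would typically receive its own answer-classification loss during training, and the subtraction is applied only at inference.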
Full text (PMC): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11485245
DOI: http://dx.doi.org/10.1109/tmm.2024.3380259
Anim Front
December 2024
Department of Animal and Dairy Sciences, University of Wisconsin-Madison, Madison, WI 53703, USA.
Cureus
December 2024
Department of Ophthalmology, Kalinga Institute of Medical Sciences, Bhubaneswar, IND.
Objective: To compare patient-reported outcome measures using the Catquest Questionnaire in patients undergoing phacoemulsification (Phaco) versus manual small-incision cataract surgery (MSICS). Materials and methods: This descriptive cross-sectional study included patients aged 40 years and older with cataracts classified as nuclear sclerosis (NS) grade 3 or higher. Demographic details were recorded, and a comprehensive ophthalmological examination was performed.
Many artificial neural networks (ANNs) trained with ecologically plausible objectives on naturalistic data align with behavior and neural representations in biological systems. Here, we show that this alignment is a consequence of convergence onto the same representations by high-performing ANNs and by brains. We developed a method to identify stimuli that systematically vary the degree of inter-model representation agreement.
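The abstract refers to inter-model representation agreement without naming a metric; below is a minimal, generic sketch using linear centered kernel alignment (CKA), a standard similarity measure between activation matrices. It is not the stimulus-selection method the authors describe, and the array shapes and synthetic data are assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation matrices.

    X: (n_stimuli, d1) activations from model A
    Y: (n_stimuli, d2) activations from model B
    Returns a scalar in [0, 1]; higher means more similar representations.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(X.T @ Y, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 256))            # model A responses to 100 stimuli
acts_b = acts_a @ rng.normal(size=(256, 128))   # model B sharing structure with A
acts_c = rng.normal(size=(100, 128))            # unrelated control model
# Agreement with the structurally related model exceeds the unrelated control.
print(linear_cka(acts_a, acts_b) > linear_cka(acts_a, acts_c))  # expected: True
```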
Sci Rep
January 2025
Integrated Intelligence Research Section, Electronics and Telecommunications Research Institute, Daejeon, 34129, Republic of Korea.
Alzheimer's disease (AD), a progressive neurodegenerative condition, notably impacts cognitive functions and daily activity. One method of detecting dementia involves a task where participants describe a given picture, and extensive research has been conducted using the participants' speech and transcribed text. However, very few studies have explored the modality of the image itself.
Visual attribution in medical imaging seeks to make evident the diagnostically relevant components of a medical image, in contrast to the more common detection of diseased tissue deployed in standard machine-vision pipelines (which are less straightforwardly interpretable or explainable to clinicians). Here we present a novel generative visual attribution technique that leverages latent diffusion models in combination with domain-specific large language models to generate normal counterparts of abnormal images. The discrepancy between the two then yields a map indicating the diagnostically relevant image components.
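As a rough sketch of the last step described above, the attribution map can be computed as the per-pixel discrepancy between the abnormal image and its generated normal counterpart. The generative step (latent diffusion conditioned on a domain-specific language model) is stubbed out with synthetic arrays here; the image shapes and the min-max normalization are assumptions.

```python
import numpy as np

def attribution_map(abnormal, normal_counterpart, eps=1e-8):
    """Per-pixel discrepancy between an abnormal image and its generated
    normal counterpart, rescaled to [0, 1] for display as a heatmap."""
    diff = np.abs(abnormal.astype(np.float32) - normal_counterpart.astype(np.float32))
    if diff.ndim == 3:                       # collapse channels if RGB
        diff = diff.mean(axis=-1)
    return (diff - diff.min()) / (diff.max() - diff.min() + eps)

# Stand-in data: in the described pipeline, `normal` would come from a
# latent-diffusion model guided by a domain-specific language model.
rng = np.random.default_rng(1)
abnormal = rng.uniform(0, 1, size=(128, 128))
normal = abnormal.copy()
normal[40:60, 40:60] = 0.2                   # pretend the lesion region was "healed"
heatmap = attribution_map(abnormal, normal)
print(heatmap.shape)                         # (128, 128); bright square marks the lesion
```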