Novel stimuli were presented in a runway in which rats ran for food. Subjects in a pre-exposure condition (n = 6) were exposed to visual and auditory stimulation and then to an olfactory stimulus; subjects in a no-pre-exposure condition (n = 6) received only the olfactory stimulus. Reaction to the odor was weaker in the pre-exposed subjects, suggesting cross-modal generalization effects in at least one kind of habituation in the rat.
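As a hedged illustration of the between-groups comparison this design implies, the Python sketch below compares the two conditions with an independent-samples t-test. The reaction scores are invented for illustration; the paper's actual data and analysis are not reproduced here.

```python
# Hypothetical sketch of the between-groups comparison described above.
# The reaction scores below are invented; the original paper's data
# and statistical procedure are not reproduced here.
from scipy import stats

# Invented "reaction to the novel odor" scores (e.g., seconds of
# investigation), one value per rat, n = 6 per condition.
pre_exposed = [4.1, 3.8, 5.0, 4.4, 3.6, 4.9]      # lights/tones presented first
not_pre_exposed = [7.2, 6.8, 8.1, 7.5, 6.4, 7.9]  # odor only

# Independent-samples t-test: is reaction weaker after cross-modal
# pre-exposure, consistent with generalized habituation?
t, p = stats.ttest_ind(pre_exposed, not_pre_exposed)
print(f"t = {t:.2f}, p = {p:.4f}")
```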


Source
http://dx.doi.org/10.2466/pms.1980.50.3c.1345


Similar Publications

Audiovisual associative memory and audiovisual integration involve common behavioral processing components and significantly overlap in their neural mechanisms. This suggests that training on audiovisual associative memory may improve audiovisual integration. The current study tested this hypothesis with a 2 (group: audiovisual training group, unimodal control group) × 2 (time: pretest, posttest) design.
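As a hedged sketch of how such a 2 × 2 mixed design can be analyzed, the example below uses invented scores. With only two time points, the group × time interaction reduces to a between-groups test on pretest-to-posttest gains.

```python
# Minimal sketch of analyzing a 2 (group) x 2 (time) mixed design.
# Scores are invented; with two time points, the group x time
# interaction is equivalent to comparing pre-to-post gain scores.
from scipy import stats

# Invented integration scores (pretest, posttest) per participant.
training = [(52, 68), (48, 65), (55, 70), (50, 66)]   # audiovisual training group
control  = [(51, 55), (49, 54), (53, 56), (47, 50)]   # unimodal control group

def gains(group):
    # Posttest minus pretest for each participant.
    return [post - pre for pre, post in group]

# Between-groups test on gains = the interaction test in this design.
t, p = stats.ttest_ind(gains(training), gains(control))
print(f"group x time interaction: t = {t:.2f}, p = {p:.4f}")
```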


Medical Visual Question Answering aims to assist doctors in decision-making when answering clinical questions about radiology images. Current models, however, learn cross-modal representations with vision and text encoders residing in two separate spaces, which inevitably leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model through Contrastive Representation Learning with Adversarial Masking.
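For context, the sketch below shows generic CLIP-style contrastive alignment of image and text embeddings in a single shared space (symmetric InfoNCE). It illustrates the idea of direct semantic alignment in one space, not UnICLAM's actual architecture or adversarial masking scheme.

```python
# Generic contrastive vision-text alignment in one shared embedding
# space (CLIP-style symmetric InfoNCE). Illustrative only; this is
# not UnICLAM's actual model.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Normalize so dot products are cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(img.size(0))      # matched pairs lie on the diagonal
    # Symmetric InfoNCE: image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

img_emb = torch.randn(8, 256)   # e.g., radiology image features
txt_emb = torch.randn(8, 256)   # e.g., clinical question features
print(contrastive_loss(img_emb, txt_emb).item())
```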


Visual-language models (VLMs) excel in cross-modal reasoning by synthesizing visual and linguistic features. Recent VLMs use prompt learning for fine-tuning, allowing adaptation to various downstream tasks. TCP applies class-aware prompt tuning to improve VLM generalization, yet its reliance on fixed text templates as prior knowledge can limit adaptability to fine-grained category distinctions.
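A minimal CoOp-style sketch of prompt learning is shown below: learnable context vectors stand in for a fixed text template such as "a photo of a {class}". TCP's class-aware tuning builds on this idea; the code is a generic illustration with placeholder embeddings, not TCP itself.

```python
# CoOp-style prompt learning sketch: shared learnable context tokens
# are prepended to (frozen) class-name token embeddings before the
# VLM's text encoder. Generic illustration, not TCP's method.
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    def __init__(self, n_ctx=4, dim=512, n_classes=10):
        super().__init__()
        # Learnable context tokens, optimized on the downstream task.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Class-name token embeddings would come from the frozen text
        # encoder; random placeholders stand in here.
        self.register_buffer("cls_tokens", torch.randn(n_classes, 1, dim))

    def forward(self):
        # Prepend the shared context to each class token, giving
        # (n_classes, n_ctx + 1, dim) prompt sequences.
        ctx = self.ctx.unsqueeze(0).expand(self.cls_tokens.size(0), -1, -1)
        return torch.cat([ctx, self.cls_tokens], dim=1)

prompts = LearnablePrompt()()
print(prompts.shape)  # torch.Size([10, 5, 512])
```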


Temporal Multi-Modal Knowledge Graphs (TMMKGs) are a synthesis of Temporal Knowledge Graphs (TKGs) and Multi-Modal Knowledge Graphs (MMKGs), combining the characteristics of both. TMMKGs can effectively model dynamic real-world phenomena, particularly in scenarios that involve multiple heterogeneous information sources and time-series characteristics, such as e-commerce websites, scene recording data, and intelligent transportation systems. We propose a Temporal Multi-Modal Knowledge Graph Generation (TMMKGG) method that can automatically construct TMMKGs, aiming to reduce construction costs.
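As a hedged illustration of the combination, a TMMKG fact can be pictured as a temporal quadruple (head, relation, tail, validity interval) extended with modality attachments. The field names below are invented for illustration and are not TMMKGG's actual schema.

```python
# Illustrative data structure for a temporal multi-modal fact:
# a TKG-style quadruple plus MMKG-style modality attachments.
# Field names are hypothetical, not TMMKGG's schema.
from dataclasses import dataclass, field

@dataclass
class TMMKGFact:
    head: str
    relation: str
    tail: str
    valid_time: tuple                               # (start, end) timestamps
    modalities: dict = field(default_factory=dict)  # e.g., {"image": path}

fact = TMMKGFact(
    head="product_123",
    relation="listed_on",
    tail="ecommerce_site_A",
    valid_time=("2023-01-01", "2023-06-30"),
    modalities={"image": "img/product_123.jpg"},
)
print(fact)
```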


Purpose: To develop a deep learning (DL) model for obstructive sleep apnea (OSA) detection and severity assessment and provide a new approach for convenient, economical, and accurate disease detection.

Methods: Considering medical reliability and acquisition simplicity, we used electrocardiogram (ECG) and oxygen saturation (SpO2) signals to develop a multimodal signal fusion multiscale Transformer model for OSA detection and severity assessment. The proposed model comprises signal preprocessing, feature extraction, cross-modal interaction, and classification modules.
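A minimal sketch of the cross-modal interaction idea appears below: ECG features attend to SpO2 features via cross-attention before classification. The paper's multiscale Transformer is more elaborate; the dimensions, sequence lengths, and four severity classes here are assumptions for illustration.

```python
# Sketch of cross-modal interaction between ECG and SpO2 feature
# sequences via cross-attention. Generic illustration; not the
# paper's actual multiscale Transformer.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.ecg_proj = nn.Linear(1, dim)    # project 1-channel ECG samples
        self.spo2_proj = nn.Linear(1, dim)   # project 1-channel SpO2 samples
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 4)  # assumed 4 OSA severity classes

    def forward(self, ecg, spo2):
        q = self.ecg_proj(ecg)               # (B, T_ecg, dim)
        kv = self.spo2_proj(spo2)            # (B, T_spo2, dim)
        fused, _ = self.attn(q, kv, kv)      # ECG queries attend to SpO2
        return self.classifier(fused.mean(dim=1))  # pool over time

model = CrossModalFusion()
logits = model(torch.randn(2, 300, 1), torch.randn(2, 60, 1))
print(logits.shape)  # torch.Size([2, 4])
```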

