Even though visual and auditory information of one and the same event often does not arrive at the sensory receptors at the same time, owing to the different physical transmission times of the modalities, the brain maintains a unitary percept of the event, at least within a certain range of sensory arrival-time differences. The properties of this "temporal window of integration" (TWIN), and its recalibration by task requirements, attention, and other variables, have recently been investigated intensively. Up to now, however, there has been no consistent definition of "temporal window" across the different paradigms used to measure its width. Here we propose such a definition based on our TWIN model (Colonius & Diederich, 2004). It applies to judgments of temporal order (or simultaneity) as well as to reaction time (RT) paradigms. Reanalyzing data from Mégevand, Molholm, Nayak, and Foxe (2013) by fitting the TWIN model to both paradigms, we confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task. This first step toward a unified concept of TWIN should be a valuable tool in guiding investigations of the neural and cognitive bases of this so-far-somewhat elusive concept.
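The TWIN model's first stage can be illustrated with a small simulation: peripheral visual and auditory processing times race, and integration occurs only if both terminate within a window of width ω. The sketch below is a minimal Monte Carlo estimate of that integration probability; the exponential distributions, the specific rates, and the window values are illustrative assumptions, not the fitted parameters from the paper.

```python
import random

def integration_probability(soa, window, rate_v=1 / 50, rate_a=1 / 30, n=100_000):
    """Monte Carlo estimate of the probability that the visual and
    auditory peripheral processes terminate within `window` ms of each
    other (the TWIN first-stage criterion for multisensory integration).

    soa    : stimulus onset asynchrony in ms (auditory lags visual if > 0)
    window : assumed width of the temporal window of integration, in ms
    rate_v, rate_a : rates of exponentially distributed peripheral
                     processing times (illustrative values only)
    """
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    hits = 0
    for _ in range(n):
        t_v = rng.expovariate(rate_v)           # visual termination time
        t_a = soa + rng.expovariate(rate_a)     # auditory termination time
        if abs(t_v - t_a) <= window:
            hits += 1
    return hits / n
```

Under this sketch, widening the window monotonically raises the estimated integration probability, which is the sense in which an RT task with a wider window integrates over a larger range of asynchronies than a TOJ task.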
DOI: http://dx.doi.org/10.1037/a0038696
Sci Rep
January 2025
Department of Psychology, Bar-Ilan University, 5290002, Ramat-Gan, Israel.
Large individual differences can be observed in studies reporting spectral TOJ. In the present study, we aimed to explore these individual differences and explain them by employing Warren and Ackroff's (1976) framework of direct identification of components and their order (direct ICO) and holistic pattern recognition (HPR). In Experiment 1, results from 177 participants replicated the large variance in participants' performance and suggested three response patterns, validated using the K-Means clustering algorithm.
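Validating response patterns with K-Means amounts to partitioning per-participant performance vectors into k clusters by nearest centroid. A minimal sketch of that procedure follows; the two-dimensional "accuracy feature" points are hypothetical stand-ins for whatever per-participant measures the study actually clustered.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-Means on 2-D points (e.g., hypothetical per-participant
    accuracy features). Returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                  + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to its cluster mean.
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```

With k = 3, as in the abstract, each participant ends up assigned to one of three response-pattern clusters.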
Sensors (Basel)
December 2024
Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Republic of Korea.
Generating accurate and contextually rich captions for images and videos is essential for various applications, from assistive technology to content recommendation. However, challenges such as maintaining temporal coherence in videos, reducing noise in large-scale datasets, and enabling real-time captioning remain significant. We introduce MIRA-CAP (Memory-Integrated Retrieval-Augmented Captioning), a novel framework designed to address these issues through three core innovations: a cross-modal memory bank, adaptive dataset pruning, and a streaming decoder.
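The cross-modal memory bank named in the abstract can be pictured as a store of caption snippets keyed by embedding vectors, queried by similarity at decoding time. The sketch below is an assumption-laden illustration of that retrieval-augmented idea, not MIRA-CAP's actual implementation; the class name, the plain-list store, and cosine-similarity ranking are all hypothetical choices.

```python
import math

class MemoryBank:
    """Hypothetical sketch of a cross-modal memory bank: stores caption
    snippets keyed by embedding vectors and retrieves the top-k entries
    most similar to a query embedding (cosine similarity)."""

    def __init__(self):
        self.entries = []  # list of (embedding, caption) pairs

    def add(self, embedding, caption):
        self.entries.append((embedding, caption))

    def retrieve(self, query, k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # Rank stored entries by similarity to the query embedding.
        ranked = sorted(self.entries,
                        key=lambda e: cosine(query, e[0]),
                        reverse=True)
        return [caption for _, caption in ranked[:k]]
```

A real system would use learned image/video and text encoders to produce the embeddings and an approximate-nearest-neighbor index for scale; the lookup logic is the same.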
Atten Percept Psychophys
January 2025
Department of Psychology, Senshu University, Kawasaki, Japan.
Directional judgments of an arrow became slower when the direction and location were incongruent in a spatial Stroop task (i.e., a standard congruency effect).
Conscious Cogn
December 2024
Department of Business and Marketing, Faculty of Commerce, Kyushu Sangyo University, 3-1 Matsukadai 2-Chome, Higashi-ku, Fukuoka 813-8503, Japan.
Sci Rep
December 2024
School of Strength and Conditioning Training, Beijing Sport University, Beijing, China.
The aim of the study was to investigate the impacts of four weeks of stroboscopic vision training (SVT) and four weeks of temporal feedback training (TFT) on elite curling athletes' duration judgment, as well as on stone delivery performance (delivery speed control and accuracy). Thirty national-level curling athletes were selected as participants and randomly assigned to either the SVT group (wearing stroboscopic glasses, with strobe frequency increasing weekly from Level 1 to Level 4), the TFT group (using a timing system to provide feedback on stone delivery time), or a control group.