Saccadic eye movements successively project the saccade target onto two retinal locations: a peripheral one before the saccade and the fovea after the saccade. Typically, performance in discriminating changes in stimulus features between these two projections is very poor. However, a short (∼200 ms) blanking of the target upon saccade onset drastically improves performance, demonstrating that a precise signal of the peripheral projection is retained during the saccade.
Perceptual learning is the ability to enhance perception through practice. The hallmark of perceptual learning is its specificity for the trained location and stimulus features, such as orientation. For example, training in discriminating a grating's orientation improves performance at the trained location but not at untrained locations.
Vision scientists have tried to classify illusions for more than a century. For example, some studies suggested that there is a unique common factor for all visual illusions. Other studies proposed that there are several subclasses of illusions, such as illusions of linear extent or distortions.
Across saccadic eye movements, the visual system receives two successive static images corresponding to the pre- and postsaccadic projections of the visual field on the retina. The existence of a mechanism integrating the content of these images is still a matter of debate. Here, we studied the transfer of a visual feature across saccades using a blanking paradigm.
The content and nature of transsaccadic memory are still a matter of debate. Brief postsaccadic target blanking has been shown to recover transsaccadic memory and to counteract saccadic suppression of displacement. We examined whether blanking would also support the transsaccadic transfer of detailed form information.
Vision scientists have attempted to classify visual illusions according to certain aspects, such as brightness or spatial features. For example, Piaget proposed that visual illusion magnitudes either decrease or increase with age. Subsequently, it was suggested that illusions are segregated according to their context: real-world contexts enhance, and abstract contexts inhibit, illusion magnitudes with age.
Common factors are ubiquitous. For example, there is a common factor, g, for intelligence. In vision, there is much weaker evidence for such common factors.
Perceptual learning is usually feature-specific. Recently, we showed that perceptual learning is even specific to the type of motor response. In a three-line bisection task, participants indicated whether the central line was offset to the left or to the right by pressing a left or a right button, respectively.
Recent studies suggest that the accuracy of perceptual judgments can be influenced by the perceived illusory size of a stimulus, with judgments being more accurate for increased illusory size. This phenomenon seems consistent with recent neuroscientific findings that representations in early visual areas reflect the perceived (illusory) size of stimuli rather than the physical size. We further explored this idea with the moon illusion, in which the moon appears larger when it is close to the horizon and smaller when it is higher in the sky.
There seems to be no common factor for visual perception, i.e., performance in one visual task correlates only weakly with performance in other visual tasks.
Despite well-established sex differences in cognition, audition, and somatosensation, few studies have investigated whether there are also sex differences in visual perception. We report the results of fifteen perceptual measures (such as visual acuity, visual backward masking, contrast detection thresholds, or motion detection) for a cohort of over 800 participants. On six of the fifteen tests, males significantly outperformed females.
Perceptual learning can occur for a feature irrelevant to the training task when it is sub-threshold and outside the focus of attention (task-irrelevant perceptual learning, TIPL); however, TIPL does not occur when the task-irrelevant feature is supra-threshold. Here, we asked whether TIPL occurs when the task-irrelevant feature is sub-threshold but within the focus of spatial attention. We tested participants in three different discrimination tasks performed on a 3-dot stimulus: a horizontal Vernier task and a vertical bisection task (during pre- and post-training sessions), and a luminance task (during training).
Perceptual learning is usually assumed to occur within sensory areas or when sensory evidence is mapped onto decisions. Subsequent procedural and motor processes, involved in most perceptual learning experiments, are thought to play no role in the learning process. Here, we show that this is not the case.
What is new in perceptual learning? In the early days of research, specificity was the hallmark of perceptual learning; that is, improvements following training were limited to the trained stimulus features. For example, training with a stimulus improves performance for this stimulus but not for the same stimulus when rotated by 90° (Ball & Sekuler, 1987; Spang, Grimsen, Herzog, & Fahle, 2010). Because of this specificity, learning was thought to be mediated by neural changes at the early stages of vision.
In cognition, audition, and somatosensation, performance strongly correlates between different paradigms, which suggests the existence of common factors. In contrast, performance in seemingly very similar visual tasks, such as visual acuity and bisection acuity, is only weakly related.
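To make the common-factor logic above concrete, here is a minimal, purely illustrative Python sketch (not code from any of these studies; the simulated data, weights, and numbers of tasks and participants are hypothetical assumptions). Strong pairwise correlations between task scores, together with a dominant first principal component of their correlation matrix, are the classic signatures of a common factor such as g; the abstracts above report that such inter-task correlations are typically weak in vision.

```python
# Minimal illustrative sketch of the common-factor logic; all data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_tasks = 200, 5

# Simulate task scores that share one latent factor plus task-specific noise
# (weights chosen arbitrarily for illustration, not taken from the studies above).
latent = rng.normal(size=(n_participants, 1))
scores = 0.7 * latent + 0.7 * rng.normal(size=(n_participants, n_tasks))

# Pairwise correlations between tasks: consistently high values suggest a common factor.
corr = np.corrcoef(scores, rowvar=False)
print("mean inter-task correlation:", corr[np.triu_indices(n_tasks, k=1)].mean())

# Variance explained by the first principal component of the correlation matrix:
# a dominant first component is the classic signature of a common factor such as g.
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("variance explained by first component:", eigvals[0] / eigvals.sum())
```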
In most models of vision, a stimulus is processed in a series of dedicated visual areas, leading to the categorization of the stimulus and, possibly, a decision, which may subsequently be mapped onto a motor response. In these models, stimulus processing is thought to be independent of the response modality. However, in theories of event coding, common coding, and sensorimotor contingency, stimuli may be very specifically mapped onto certain motor responses.
Perceptual learning is usually thought to be driven exclusively by the stimuli presented during training (and the underlying synaptic learning rules). In a sense, we are slaves to our visual experiences. However, learning can occur even when no stimuli are presented at all.
Active sensing has important consequences for multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the "where" and the "when" of a sound, respectively.
In typical perceptual learning experiments, one stimulus type (e.g., a bisection stimulus offset either to the left or right) is presented per trial.