Here, we provide an analysis of the microsaccades that occurred during continuous visual search for, and targeting of, small faces that we pasted either into cluttered background photographs or onto a plain gray background. Subjects continuously used their eyes to target single 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed at a different, randomly chosen location.
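The gaze-contingent logic of this paradigm can be expressed in a few lines. The sketch below is purely illustrative and is not the authors' experimental code: the acceptance radius, the ±10-degree placement range, and the get_gaze_deg/show_scene_with_face callbacks are assumptions standing in for the eye tracker and display routines.

```python
import math
import random
import time

FACE_SIZE_DEG = 3.0        # face size reported in the study
ACCEPT_RADIUS_DEG = 1.5    # assumed acceptance radius around the face center

def run_block(duration_s, get_gaze_deg, show_scene_with_face, scenes):
    """Gaze-contingent loop: as soon as gaze lands on the current face,
    draw a new scene with the face at a fresh random location.
    get_gaze_deg() and show_scene_with_face() are hypothetical callbacks."""
    target = (random.uniform(-10, 10), random.uniform(-10, 10))
    show_scene_with_face(random.choice(scenes), target)
    n_targets, t_end = 0, time.time() + duration_s
    while time.time() < t_end:
        gx, gy = get_gaze_deg()  # current gaze position, in degrees
        if math.hypot(gx - target[0], gy - target[1]) < ACCEPT_RADIUS_DEG:
            n_targets += 1       # target acquired: move the face elsewhere
            target = (random.uniform(-10, 10), random.uniform(-10, 10))
            show_scene_with_face(random.choice(scenes), target)
    return n_targets
```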
A number of fMRI studies have provided support for the existence of multiple concept representations in areas of the brain such as the anterior temporal lobe (ATL) and inferior parietal lobule (IPL). However, the interaction among different conceptual representations remains unclear. To better understand the dynamics of how the brain extracts meaning from sensory stimuli, we conducted a human high-density electroencephalography (EEG) study in which we first trained participants to associate pseudowords with various animal and tool concepts.
Accurate stimulus onset timing is critical to almost all behavioral research. Auditory, visual, and manual-response stimulus onset signals are typically sent over wires to the various machines that record data such as eye gaze positions, electroencephalography (EEG), stereo electroencephalography (sEEG), and electrocorticography (ECoG). These stimulus onsets are then collated and analyzed according to experimental condition.
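In practice, this kind of synchronization usually amounts to timestamping the display flip and emitting a hardware marker on a shared trigger line. The sketch below is a minimal, hypothetical illustration of that idea; flip_screen and send_ttl are assumed callbacks for the display swap and the trigger hardware rather than functions from any particular toolbox.

```python
import time

def present_with_trigger(flip_screen, send_ttl, condition_code):
    """Timestamp stimulus onset and emit a hardware trigger so the same
    onset can be aligned across eye tracking, EEG, sEEG, and ECoG records.
    flip_screen() and send_ttl() are hypothetical callbacks."""
    flip_screen()                 # stimulus appears on this screen refresh
    onset = time.perf_counter()   # high-resolution software timestamp
    send_ttl(condition_code)      # hardware marker recorded by each system
    return onset
```

The returned software timestamp and the hardware marker can then be collated per experimental condition during analysis.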
The human visual system can detect objects in streams of rapidly presented images at presentation rates of 70 Hz and beyond. Yet, target detection is often impaired when multiple targets are presented in quick temporal succession. Here, we provide evidence for the hypothesis that such impairments can arise from interference between "top-down" feedback signals and the initial "bottom-up" feedforward processing of the second target.
While several studies have shown human subjects' impressive ability to detect faces in individual images in paced settings (Crouzet et al., 2010), here we report the details of an eye movement dataset in which subjects rapidly and continuously targeted single faces embedded in different scenes at rates approaching six face targets per second (including blinks and eye movement times). In this paper, we describe a large, publicly available eye movement dataset for this new psychophysical paradigm (Martin et al.
A number of studies have shown human subjects' impressive ability to detect faces in individual images, with saccadic reaction times starting as fast as 100 ms after stimulus onset. Here, we report evidence that humans can rapidly and continuously saccade towards single faces embedded in different scenes at rates approaching 6 faces/scenes per second (including blinks and eye movement times). These observations are impressive, given that humans usually make no more than 2 to 5 saccades per second when searching a single scene with eye movements.
IEEE Trans Neural Netw Learn Syst, August 2013
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX).
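For readers unfamiliar with HMAX, its core idea is an alternation of template-matching ("S") and max-pooling ("C") stages. The sketch below illustrates only the first two stages (S1 Gabor filtering and C1 max pooling over position and adjacent scales) under that common description; the filter sizes and pooling parameters are arbitrary illustrative choices, not the implementation evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, orientation, sigma, gamma=0.3):
    """Zero-mean Gabor patch, the usual choice for HMAX S1 units."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

def s1_c1(image, sizes=(7, 9), n_orientations=4, pool=8):
    """S1: Gabor filtering at several scales and orientations.
       C1: max pooling over local position and over adjacent scales."""
    c1_maps = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        per_scale = []
        for size in sizes:
            k = gabor_kernel(size, wavelength=size / 2,
                             orientation=theta, sigma=size / 4)
            s1 = np.abs(convolve2d(image, k, mode='same'))
            per_scale.append(maximum_filter(s1, size=pool))  # spatial max
        c1_maps.append(np.max(per_scale, axis=0))            # max over scales
    return np.stack(c1_maps)  # (n_orientations, H, W) C1 responses
```

Calling s1_c1 on a 2-D grayscale array returns one pooled map per Gabor orientation; full HMAX models then apply further template-matching and pooling stages on top of these C1 responses.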
A hallmark of human cognition is the ability to rapidly assign meaning to sensory stimuli. It has been suggested that this fast visual object categorization ability is accomplished by a feedforward processing hierarchy consisting of shape-selective neurons in occipito-temporal cortex that feed into task circuits in frontal cortex computing conceptual category membership. We performed an EEG rapid adaptation study to test this hypothesis.
Self-organization, a process by which the internal organization of a system changes without supervision, has been proposed as a possible basis for multisensory enhancement (MSE) in the superior colliculus (Anastasio and Patton, 2003). We simplify and extend these results by presenting a simulation using traditional self-organizing maps, intended to understand and simulate MSE as it may generally occur throughout the central nervous system. This simulation of MSE: (1) uses a standard unsupervised competitive learning algorithm, (2) learns from artificially generated activation levels corresponding to driven and spontaneous stimuli from separate and combined input channels, (3) uses a sigmoidal transfer function to generate quantifiable responses to separate inputs, (4) enhances the responses when those same inputs are combined, (5) obeys the inverse effectiveness principle of multisensory integration, and (6) can topographically congregate MSE in a manner similar to that seen in cortex.
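As a rough illustration of points (1) through (4), the sketch below trains a small one-dimensional self-organizing map on two-channel (e.g., visual and auditory) activation vectors and passes a learned unit's net input through a sigmoid; the response to combined driven inputs then exceeds the best unimodal response in this toy setup. All parameters (map size, learning rate, neighborhood width, sigmoid gain and threshold) are arbitrary stand-ins, not those of the published simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, gain=6.0, threshold=1.0):
    """Sigmoidal transfer function mapping net input to a response."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def make_input(visual_driven, auditory_driven):
    """Driven channels are strongly active; undriven channels show only
    weak spontaneous activity."""
    v = rng.normal(0.8 if visual_driven else 0.1, 0.05)
    a = rng.normal(0.8 if auditory_driven else 0.1, 0.05)
    return np.clip([v, a], 0.0, None)

# Standard unsupervised competitive (SOM) learning on a small 1-D map.
n_units, lr, width = 10, 0.2, 1.5
weights = rng.uniform(0.0, 0.2, size=(n_units, 2))
for _ in range(2000):
    x = make_input(rng.random() < 0.5, rng.random() < 0.5)
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    hood = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * width**2))
    weights += lr * hood[:, None] * (x - weights)

# Compare one unit's responses to separate vs. combined driven inputs.
unit = np.argmin(np.linalg.norm(weights - make_input(True, True), axis=1))
r_v = sigmoid(weights[unit] @ make_input(True, False))
r_a = sigmoid(weights[unit] @ make_input(False, True))
r_va = sigmoid(weights[unit] @ make_input(True, True))
print(f"visual {r_v:.2f}  auditory {r_a:.2f}  combined {r_va:.2f}  "
      f"enhancement {100 * (r_va - max(r_v, r_a)) / max(r_v, r_a):.0f}%")
```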