Recent research that assessed spatial judgments about multisensory stimuli suggests that humans integrate multisensory inputs in a statistically optimal manner by weighting each input by its normalized reciprocal variance. Is integration similarly optimal when humans judge the temporal properties of bimodal stimuli? Twenty-four participants performed temporal order judgments (TOJs) about 2 spatially separated stimuli. Stimuli were auditory, vibrotactile, or both. The temporal profiles of the vibrotactile stimuli were manipulated to produce 3 levels of precision for TOJs. In bimodal conditions, the asynchrony between the 2 unimodal stimuli that comprised a bimodal stimulus was manipulated to determine the weight given to touch. Bimodal performance on 2 measures (judgment uncertainty and tactile weight) was predicted from unimodal data. A model relying exclusively on audition was rejected on the basis of both measures. A second model that selected the better input on each trial did not predict the reduced judgment uncertainty observed in bimodal trials. Only the optimal maximum-likelihood-estimation (MLE) model predicted both judgment uncertainties and weights, extending the model's validity to TOJs. Alternatives for modeling the process of event sequencing based on integrated multisensory inputs are discussed.
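The MLE predictions tested here follow directly from the reciprocal-variance weighting described above. The sketch below (not from the article; variable names and the example values are illustrative) shows how the predicted tactile weight and the predicted bimodal judgment uncertainty would be computed from the two unimodal uncertainties.

```python
import numpy as np

def mle_prediction(sigma_aud, sigma_tac):
    """Predict bimodal performance from unimodal variabilities under the
    standard maximum-likelihood-estimation (MLE) model of cue integration.

    sigma_aud, sigma_tac: standard deviations (e.g., JNDs, in ms) of the
    unimodal auditory and tactile temporal estimates.
    Returns (w_tac, sigma_bimodal): the predicted tactile weight and the
    predicted bimodal standard deviation.
    """
    # Reliability = reciprocal variance; weights are normalized reliabilities.
    r_aud = 1.0 / sigma_aud**2
    r_tac = 1.0 / sigma_tac**2
    w_tac = r_tac / (r_aud + r_tac)
    # Optimal integration: bimodal variance is the reciprocal of the summed
    # reliabilities, so it never exceeds the better unimodal variance.
    sigma_bimodal = np.sqrt(1.0 / (r_aud + r_tac))
    return w_tac, sigma_bimodal

# Illustrative example: a tactile estimate twice as variable as the auditory one.
w_tac, sigma_bi = mle_prediction(sigma_aud=30.0, sigma_tac=60.0)
print(f"predicted tactile weight: {w_tac:.2f}")        # 0.20
print(f"predicted bimodal sigma:  {sigma_bi:.1f} ms")  # ~26.8 ms
```

Under these assumed values the model predicts that touch receives only 20% of the weight and that bimodal uncertainty falls below the better (auditory) unimodal uncertainty, which is the signature that distinguishes optimal integration from simply selecting the better input on each trial.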
DOI: http://dx.doi.org/10.1037/a0015021