Publications by authors named "Ronnie B Wilbur"

The visual environment of sign language users differs markedly in its spatiotemporal parameters from that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling.

Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes, such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognitive processes are not well understood.

Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal.
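
A minimal sketch of this kind of measure (not the study's pipeline; the file names, sampling rates, and channel choice are hypothetical) is magnitude-squared coherence between one EEG channel and the per-frame optical-flow magnitude of a stimulus video, computed with the standard OpenCV and SciPy APIs:

```python
import cv2
import numpy as np
from scipy.signal import coherence, resample

def optical_flow_magnitude(video_path):
    """Mean optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    return np.array(mags)

# Hypothetical inputs: one EEG channel at 250 Hz, stimulus video at 25 fps.
eeg, eeg_fs, video_fps = np.load("subject01_Oz.npy"), 250, 25
flow = optical_flow_magnitude("oegs_stimulus.mp4")
flow_up = resample(flow, int(len(flow) * eeg_fs / video_fps))  # match sampling rates
n = min(len(eeg), len(flow_up))
f, Cxy = coherence(eeg[:n], flow_up[:n], fs=eeg_fs, nperseg=512)
print(f"peak coherence {Cxy.max():.2f} at {f[np.argmax(Cxy)]:.1f} Hz")
```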

A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound.

Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e.

Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue, because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn sign language later, a situation that is drastically different from that of spoken language acquisition.

Neuroimaging experiments in general, and EEG experiments in particular, must take care to avoid confounds. A recent TPAMI paper uses data that suffers from a serious, previously reported confound. We demonstrate that their new model and analysis methods do not remedy this confound, and therefore that their claims of high accuracy and neuroscience relevance are invalid.

A recent paper [31] claims to classify brain processing evoked in subjects watching ImageNet stimuli as measured with EEG and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [11, 18, 20, 24, 25, 30, 34], claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed.
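
The dependence on block design can be illustrated with a toy simulation (our sketch, not the paper's data or code): when a slow, class-independent drift is present in the recording, a classifier decodes "class" far above chance under a block design, because class identity is confounded with time, but falls to chance when trial order is randomized.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, trials_per_class, n_features = 4, 50, 16
n_trials = n_classes * trials_per_class

# Class-independent slow drift over the session (e.g., electrode impedance
# or arousal changes), plus white noise; there is no true class signal.
drift = np.linspace(0, 1, n_trials)[:, None] * rng.normal(size=(1, n_features))
X = drift + 0.1 * rng.normal(size=(n_trials, n_features))

block_labels = np.repeat(np.arange(n_classes), trials_per_class)  # block design
rapid_labels = rng.permutation(block_labels)                      # rapid-event design

clf = LogisticRegression(max_iter=1000)
for name, y in [("block", block_labels), ("rapid-event", rapid_labels)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} design accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```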

Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015).

One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for development of neural processing networks for linguistic sub-domains are unknown.

To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g.

Previous studies of Austrian Sign Language (ÖGS) word-order variations have demonstrated the human processing system's tendency to interpret a sentence-initial (case-)ambiguous argument as the subject of the clause ("subject preference"). The electroencephalogram study motivating the current report revealed earlier reanalysis effects for object-subject compared to subject-object sentences, in particular before the start of the movement of the agreement-marking sign. The effects were bound to time points before both arguments were referenced in space and/or to the transitional hand movement preceding the disambiguating sign.

The question of apparent discrepancies in short-term memory capacity for sign language and speech has long presented difficulties for models of verbal working memory. While short-term memory (STM) capacity for spoken language spans up to 7 ± 2 items, the verbal working memory capacity for sign languages appears to be lower, at 5 ± 2. The assumption that both auditory and visual communication (sign language) rely on the same memory buffers led to claims of impaired STM buffers in sign language users.

Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that the "subject"-first strategy is universal and not dependent on the language modality (spoken vs. signed).

The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels coordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that information throughput capacity, mathematically defined, is highest on the dominant hand (DH).
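
The paper's exact mathematical definition of throughput is not reproduced here; as a hedged stand-in, a common proxy is the Shannon entropy of a discretized kinematic signal per channel. The sketch below uses synthetic trajectories in place of real motion-capture markers, with shared bin edges so the channels are compared on the same scale.

```python
import numpy as np

def speed(positions, fs):
    """Frame-to-frame speed (units/s) of an (n_frames, 3) trajectory."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs

def entropy_bits(signal, bin_edges):
    """Shannon entropy (bits/sample) of a signal quantized by bin_edges."""
    counts, _ = np.histogram(signal, bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

fs = 120  # hypothetical motion-capture sampling rate (Hz)
rng = np.random.default_rng(1)
dominant = np.cumsum(rng.normal(0, 2.0, (1200, 3)), axis=0)     # more varied motion
nondominant = np.cumsum(rng.normal(0, 0.5, (1200, 3)), axis=0)  # less varied motion

s_dom, s_non = speed(dominant, fs), speed(nondominant, fs)
edges = np.histogram_bin_edges(np.concatenate([s_dom, s_non]), bins=64)
for name, s in [("dominant hand", s_dom), ("nondominant hand", s_non)]:
    print(f"{name}: {entropy_bits(s, edges):.2f} bits/sample")
```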

Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language.

There has been a scarcity of studies exploring the influence of students' American Sign Language (ASL) proficiency on their academic achievement in ASL/English bilingual programs. The aim of this study was to determine the effects of ASL proficiency on reading comprehension skills and academic achievement of 85 deaf or hard-of-hearing signing students. Two subgroups, differing in ASL proficiency, were compared on the Northwest Evaluation Association Measures of Academic Progress and the reading comprehension subtest of the Stanford Achievement Test, 10th edition.

Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia).

To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made in understanding the features that define ASL manuals, much still needs to be done, even after years of research, to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty of correlating facial features with linguistic features, especially since these correlations may be temporally defined.
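
One simple way to probe temporally defined correlations of this kind, sketched below with synthetic data (not the authors' method), is a time-lagged correlation between a continuous facial-feature trace and a binary linguistic annotation track; the lag with peak correlation estimates the relative timing of the two signals.

```python
import numpy as np

def lagged_corr(face, marker, lag):
    """Pearson correlation of face[t] with marker[t - lag]
    (positive lag: the facial trace follows the linguistic marker)."""
    if lag > 0:
        face, marker = face[lag:], marker[:-lag]
    elif lag < 0:
        face, marker = face[:lag], marker[-lag:]
    return np.corrcoef(face, marker)[0, 1]

fs = 30  # hypothetical video frame rate (Hz)
rng = np.random.default_rng(2)
# Binary linguistic track with temporally extended spans of marking.
marker = (np.convolve(rng.random(900) < 0.2, np.ones(15), mode="same") > 0).astype(float)
# Synthetic facial-feature trace that follows the marker by 6 frames, plus noise.
face = np.roll(marker, 6) + 0.3 * rng.normal(size=900)

best = max(range(-15, 16), key=lambda lag: lagged_corr(face, marker, lag))
print(f"peak correlation at lag {best} frames ({1000 * best / fs:.0f} ms)")
```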

Purpose: Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g.

This article presents an experimental investigation of kinematics of verb sign production in American Sign Language (ASL) using motion capture data. The results confirm that event structure differences in the meaning of the verbs are reflected in the kinematic formation: for example, in the telic verbs (THROW, HIT), the end-point of the event is marked in the verb sign movement by significantly greater deceleration, as compared to atelic verbs (SWIM, TRAVEL). This end-point marker is highly robust regardless of position of the verb in the sentence (medial vs.
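
As a minimal sketch of this kinematic marker (synthetic trajectories and an assumed data layout, not the study's data or code), end-point deceleration can be estimated from a 3-D wrist trajectory as the largest negative change in speed near the end of the sign:

```python
import numpy as np

def endpoint_deceleration(positions, fs, window=0.1):
    """Peak deceleration (negative d|v|/dt) in the final `window` seconds
    of an (n_frames, 3) position trajectory sampled at fs Hz."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fs
    accel = np.diff(speed) * fs               # tangential acceleration
    n = max(1, int(window * fs))
    return -accel[-n:].min()                  # largest deceleration in the window

# Hypothetical trajectories: a telic-like sign stops abruptly near the end,
# an atelic-like sign slows gradually.
fs, t = 120, np.linspace(0, 1, 120)
telic = np.c_[np.minimum(t * 1.05, 1.0), np.zeros_like(t), np.zeros_like(t)]
atelic = np.c_[np.sin(np.pi * t / 2), np.zeros_like(t), np.zeros_like(t)]
for name, traj in [("telic-like", telic), ("atelic-like", atelic)]:
    print(f"{name}: peak end deceleration = {endpoint_deceleration(traj, fs):.1f} units/s^2")
```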

Event structure describes the relationship between the general semantics (Aktionsart) of a verb and its syntactic properties, separating verbs into two classes: telic verbs, which denote change-of-state events with an inherent end-point or boundary (catch, rescue), and atelic verbs, which refer to homogeneous activities (tease, host). Because telic verbs describe events in which the internal argument (Patient) is affected, we hypothesized that processing of the telic verb template would activate the syntactic position of the Patient during sentence comprehension. Event-related brain potentials (ERPs) were recorded from 20 English speakers who read sentences with reduced object relative clauses in which the verb was either telic or atelic.

Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia and Wilbur, in press). Non-signers also show sensitivity to velocity in deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al.

The question addressed in this paper is how a language that is fundamentally monosyllabic in structure can have about a dozen different reduplication types with at least eight different linguistic functions. The language under discussion, American Sign Language (ASL), is one representative of a class of languages that makes widespread use of reduplication for lexical and morphological purposes. The goal here is to present the set of phonological features that permit the productive construction of these forms, and a first approximation to the feature geometry in which they participate.

Early acquisition of a natural language, signed or spoken, has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life (Mayberry 2007, 2009). This review summarizes a number of recent neuroimaging studies in order to detail the neural bases of sign language acquisition. The logic of this review is to present research reports that contribute to the bigger picture: people who acquire a natural language, spoken or signed, in the normal way possess specialized linguistic abilities and brain functions that are missing or deficient in people whose exposure to natural language is delayed or absent.

Spoken languages are characterized by flexible, multivariate prosodic systems. As natural languages, American Sign Language (ASL) and other sign languages (SLs) are expected to be characterized in the same way. Artificially created signing systems for classroom use, such as signed English, serve as a contrast to natural sign languages.
