Publications by authors named "Charissa Lansing"

Listeners weight acoustic cues in speech according to their reliability, but few studies have examined how cue weights change across the lifespan. Previous work has suggested that older adults have deficits in auditory temporal discrimination, which could affect the reliability of temporal phonetic cues, such as voice onset time (VOT), and in turn, impact speech perception in real-world listening environments. We addressed this by examining younger and older adults' use of VOT and onset F0 (a secondary phonetic cue) for voicing judgments (e.
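
As a rough illustration of how cue weights of this kind are often quantified in the speech perception literature (not necessarily the analysis used in this study), the sketch below fits a logistic regression of simulated voicing responses on standardized VOT and onset F0 values and reads the coefficient magnitudes as relative cue weights. All names, values, and the simulated listener are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, synthetic trial data: each row is one voicing judgment.
rng = np.random.default_rng(0)
n = 500
vot_ms = rng.uniform(0, 60, n)           # voice onset time in ms
onset_f0_hz = rng.uniform(90, 140, n)    # onset F0 in Hz

# Simulated listener who weights VOT heavily and onset F0 weakly.
logit = 0.15 * (vot_ms - 30) + 0.02 * (onset_f0_hz - 115)
responded_voiceless = rng.random(n) < 1 / (1 + np.exp(-logit))

# Standardize the cues so the fitted coefficients are comparable as weights.
X = np.column_stack([(vot_ms - vot_ms.mean()) / vot_ms.std(),
                     (onset_f0_hz - onset_f0_hz.mean()) / onset_f0_hz.std()])
model = LogisticRegression().fit(X, responded_voiceless)
print("VOT weight:", model.coef_[0][0], "onset F0 weight:", model.coef_[0][1])
```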

Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision.

Objective: A pair of experiments investigated the hypothesis that bimodal (auditory-visual) speech presentation and expanded auditory bandwidth would improve speech intelligibility and increase working memory performance for older adults by reducing the cognitive effort needed for speech perception.

Background: Although telephone communication is important for helping older adults maintain social engagement, age-related sensory and working memory limits may make telephone conversations difficult.

Method: Older adults with either age-normal hearing or mild-to-moderate sensorineural hearing loss performed a running memory task.

Purpose: Two Web-based surveys (Surveys I and II) were used to assess perceptions of faculty and students in Communication Sciences and Disorders (CSD) regarding the responsible conduct of research (RCR).

Method: Survey questions addressed 9 domains considered central to RCR: (a) human subjects protections; (b) research involving animals; (c) publication practices and responsible authorship; (d) mentor/trainee responsibilities; (e) collaborative science; (f) peer review; (g) data acquisition, management, sharing, and ownership; (h) conflicts of interest; and (i) research misconduct. Respondents rated each of 37 topics for importance and for sufficiency of instructional coverage.

Purpose: The purpose of this 2-part study was to determine the importance of specific topics relating to publication ethics and the adequacy of the American Speech-Language-Hearing Association's (ASHA's) policies regarding these topics.

Method: A 56-item Web-based survey was sent to (a) ASHA journal editors, associate editors, and members of the Publications Board (Group 1); (b) authors, reviewers, and members of ASHA's Board of Ethics (Group 2); and (c) a random sample of the ASHA membership, characterized as journal readers (Group 3). The survey contained 4 questions related to ethical principles associated with the publication of research: (a) In regard to scientific integrity in research publications in general, how important is the issue of [topic]? (b) Should ASHA publication policies address this issue? (c) Do ASHA policies address this issue? (d) If yes, how adequately do ASHA policies address this issue? A second study evaluated the contents of ASHA's publication policy documents in regard to their coverage of the survey topics.

The goals of this study were to measure sensitivity to the direct-to-reverberant energy ratio (D/R) across a wide range of D/R values and to gain insight into which cues are used in the discrimination process. The main finding is that changes in D/R are discriminated primarily based on spectral cues. Temporal cues may be used but only when spectral cues are diminished or not available, while sensitivity to interaural cross-correlation is too low to be useful in any of the conditions tested.
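
For readers unfamiliar with the quantity, the sketch below shows one common way to compute D/R from a measured room impulse response: the energy in a short window around the direct-path arrival is compared with the energy of everything after it. The window length and the synthetic impulse response are assumptions made for illustration, not the stimuli or procedure of this study.

```python
import numpy as np

def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
    """Estimate D/R (in dB) from a room impulse response. Energy in a short
    window around the strongest peak counts as direct; the rest of the tail
    counts as reverberant. The 2.5 ms window is an illustrative choice."""
    peak = int(np.argmax(np.abs(rir)))               # direct-path arrival
    half_win = int(direct_window_ms * 1e-3 * fs)
    direct = rir[max(peak - half_win, 0): peak + half_win]
    reverberant = rir[peak + half_win:]
    return 10.0 * np.log10(np.sum(direct ** 2) / np.sum(reverberant ** 2))

# Toy usage with a synthetic impulse response: a strong direct path followed
# by an exponentially decaying noise tail.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
rir = np.exp(-6 * t) * np.random.default_rng(0).standard_normal(t.size)
rir[0] = 5.0                                         # synthetic direct path
print(f"D/R = {direct_to_reverberant_ratio(rir, fs):.1f} dB")
```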

Objectives: The purpose of this study was to examine characteristics of eye gaze behavior, specifically eye fixations, during reception of simultaneous communication (SC). SC was defined as conceptually accurate and semantically based signs and fingerspelling used in conjunction with speech. Specific areas of focus were (1) the pattern of frequency, duration, and location of observers' eye fixations in relation to the critical source of disambiguating information (signs or speech) in SC, and (2) how the pattern of an observer's eye fixations was related to the source of critical information (sign or speech), expectations regarding the location of the critical information after exposure to the stimulus set, observer characteristics, and the sender.

Extraction of a target sound source amidst multiple interfering sound sources is difficult when there are fewer sensors than sources, as is the case for human listeners in the classic cocktail-party situation. This study compares the signal extraction performance of five algorithms using recordings of speech sources made with three different two-microphone arrays in three rooms of varying reverberation time. Test signals, consisting of two to five speech sources, were constructed for each room and array.
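
As a minimal illustration of two-microphone extraction (not one of the five algorithms compared here, which the excerpt does not name), the sketch below implements a simple delay-and-sum beamformer: one channel is delayed so the target is time-aligned across microphones, the channels are averaged, and misaligned interferers are partially cancelled. The sampling rate, delays, and signals are assumptions for the example.

```python
import numpy as np

def delay_and_sum(left, right, delay_samples):
    """Two-microphone delay-and-sum: delay one channel so the target is
    aligned across microphones, then average. Signals from other directions
    remain misaligned and are attenuated."""
    return 0.5 * (left + np.roll(right, delay_samples))

# Toy usage: the target is already aligned across channels, while the
# interferer reaches the right microphone 27 samples later (roughly half a
# cycle of the 300 Hz tone, so averaging nearly cancels it).
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)
interferer = np.sin(2 * np.pi * 300 * t)
left = target + interferer
right = target + np.roll(interferer, 27)
out = delay_and_sum(left, right, 0)       # target direction needs no delay

residual = out - target                   # what remains of the interferer
print("Interferer power before:", np.mean(interferer ** 2).round(4),
      "after:", np.mean(residual ** 2).round(4))
```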

The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration, which affects speech intelligibility and sound localization.
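
As one standard way to estimate RT from a measured impulse response (the excerpt does not say which method the study used), the sketch below applies Schroeder backward integration to obtain the energy decay curve, fits the T30 range, and extrapolates to a 60 dB decay. The fitting range and the synthetic impulse response are assumed choices for illustration.

```python
import numpy as np

def rt60_schroeder(rir, fs, decay_db=30.0):
    """Estimate RT60 from a room impulse response via Schroeder backward
    integration: fit the energy decay curve between -5 dB and
    -(5 + decay_db) dB, then extrapolate the slope to a 60 dB decay."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]        # backward-integrated energy
    edc_db = 10.0 * np.log10(energy / energy[0])    # energy decay curve in dB
    i_start = int(np.argmax(edc_db <= -5.0))
    i_end = int(np.argmax(edc_db <= -(5.0 + decay_db)))
    t = np.arange(rir.size) / fs
    slope, _ = np.polyfit(t[i_start:i_end], edc_db[i_start:i_end], 1)
    return -60.0 / slope                            # seconds for a 60 dB decay

# Toy check: noise shaped by an exponential decay whose true RT60 is about 1 s.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rir = np.exp(-6.9 * t) * np.random.default_rng(1).standard_normal(t.size)
print(f"Estimated RT60 = {rt60_schroeder(rir, fs):.2f} s")
```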

In this study, we investigated where people look on talkers' faces as they try to understand what is being said. Sixteen young adults with normal hearing and demonstrated average speechreading proficiency were evaluated under two modality presentation conditions: vision only versus vision plus low-intensity sound. They were scored for the number of words correctly identified from 80 unconnected sentences spoken by two talkers.

Although Central Institute for the Deaf (CID) W-1 stimuli are routinely used for speech recognition threshold (SRT) testing, they are not always familiar to new learners of English and often lead to erroneous assessments. To improve test accuracy, alternative stimuli were constructed by pairing familiar English digits. These digit pairs were used to measure SRT for 12 non-native speakers of English and 12 native speakers of English.
