Acoustically distinct and perceptually ambiguous: ʔayʔaǰuθəm (Salish) fricatives.

J Acoust Soc Am

Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada.

Published: April 2020

ʔayʔaǰuθəm (Comox-Sliammon) is a Central Salish language spoken in British Columbia with a large fricative inventory. Previous impressionistic descriptions of ʔayʔaǰuθəm have noted perceptual ambiguity of select anterior fricatives. This paper provides an auditory-acoustic description of the four anterior fricatives /θ s ʃ ɬ/ in the Mainland dialect of ʔayʔaǰuθəm. Peak ERB trajectories, noise duration, and formant transitions are analysed in the fricative productions of five speakers. These analyses provide quantitative and qualitative descriptions of these fricative contrasts, indicating more robust acoustic differentiation for fricatives in onset versus coda position. In a perception task, English listeners categorized fricatives in CV and VC sequences from the natural productions. The results of the perception experiment are consistent with reported perceptual ambiguity between /s/ and /θ/, with listeners frequently misidentifying /θ/ as /s/. The production and perception data suggest that listener L1 categories play a role in the categorization and discrimination of ʔayʔaǰuθəm fricatives. These findings provide an empirical description of fricatives in an understudied language and have implications for L2 teaching and learning in language revitalization contexts.
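To make the peak ERB measure concrete, the sketch below (Python/NumPy) shows one way such a value could be computed: take the spectrum of a labelled fricative noise interval, locate its peak frequency, and convert that frequency to the ERB-rate scale with the Glasberg and Moore (1990) formula. This is an illustrative reconstruction under stated assumptions, not the authors' actual analysis pipeline; the function names, window choice, and frequency band are assumptions.

    # Minimal sketch: peak ERB of a labelled fricative noise interval.
    # Assumes a mono waveform `signal` at sample rate `sr` and hand-labelled
    # fricative boundaries in seconds; all names here are illustrative only.

    import numpy as np

    def hz_to_erb_rate(f_hz):
        # Glasberg & Moore (1990) ERB-rate conversion
        return 21.4 * np.log10(0.00437 * np.asarray(f_hz) + 1.0)

    def peak_erb(signal, sr, start_s, end_s, fmin=500.0, fmax=11000.0):
        # Slice out the labelled fricative noise and window it
        frames = signal[int(start_s * sr):int(end_s * sr)]
        windowed = frames * np.hanning(len(frames))
        # Magnitude spectrum and corresponding frequency axis
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(windowed), d=1.0 / sr)
        # Restrict the search to an assumed fricative frequency band
        band = (freqs >= fmin) & (freqs <= fmax)
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return float(hz_to_erb_rate(peak_hz))

    # Noise duration is simply the labelled interval length:
    # duration_ms = (end_s - start_s) * 1000.0

In practice, peak trajectories of the kind analysed in the paper would come from repeating a measurement like this over successive windows across the fricative rather than from a single spectrum.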

DOI: http://dx.doi.org/10.1121/10.0001007

Publication Analysis

Top Keywords

perceptual ambiguity (8); anterior fricatives (8); fricatives (7); ʔayʔaǰuθəm (5); acoustically distinct (4); distinct perceptually (4); perceptually ambiguous (4); ambiguous ʔayʔaǰuθəm (4); ʔayʔaǰuθəm salish (4); salish fricatives (4)

Similar Publications

We examined categorical processing biases in the perception and recognition of facial expressions of emotion across two studies. In both studies, participants first learned to discriminate between two ambiguous facial expressions of emotion selected from the middle of a continuous array of blended expressions.


Chemosensory Cues Modulate Women's Jealousy Responses to Vocal Femininity.

Arch Sex Behav

January 2025

Department of Applied Social Sciences, Hong Kong Polytechnic University, Hung Hom, Hong Kong, China.

Jealousy responses to potential mating rivals are stronger when those rivals display cues indicating higher mate quality. One such cue is vocal femininity in women's voices, with higher-pitched voices eliciting greater jealousy responses. However, cues to mate quality are not evaluated in isolation.


Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.


In response to the low automation levels, heavy labor intensity, and high accident rates of underground coal mine auxiliary transportation systems, this paper presents the mining trackless auxiliary transportation robot (MTATBOT). The MTATBOT is designed for long-range, space-constrained, and explosion-proof underground coal mine environments. With an onboard perception and autopilot system, it can perform automated, unmanned subterranean material transportation.


Facial Emotion Recognition and its Associations With Psychological Well-Being Across Four Schizotypal Dimensions: a Cross-Sectional Study.

Arch Clin Neuropsychol

January 2025

Laboratory of Neuropsychology, Department of Psychology, School of Social Sciences, Gallos University campus, University of Crete, Rethymno 74100, Greece.

Objective: The present study aimed to examine facial emotion recognition in a sample from the general population with elevated schizotypal traits, as defined by the four-factor model of schizotypy, and the association of facial emotion recognition and the schizotypal dimensions with psychological well-being.

Method: Two hundred and thirty-eight participants were allocated into four schizotypal groups and one control group. Following a cross-sectional study design, facial emotion recognition was assessed with a computerized task that included images from the Radboud Faces Database, schizotypal traits were measured with the Schizotypal Personality Questionnaire, and psychological well-being was evaluated with the Flourishing scale.

