When making phone calls, cellphone and smartphone users are simultaneously exposed to radio-frequency (RF) electromagnetic fields (EMFs) and to sound pressure. Speech intelligibility during mobile phone calls depends on the sound pressure level of speech relative to any background sounds, and also on the RF-EMF exposure, since signal quality is correlated with RF-EMF strength. In addition, speech intelligibility, sound pressure level, and RF-EMF exposure all depend on how the call is made (in speaker mode, with the phone held at the ear, or with a headset). This study determines the relationship between speech intelligibility, sound exposure, and RF-EMF exposure. To this end, the transmitted RF-EMF power was recorded during phone calls made by 53 subjects in three controlled exposure scenarios: calling with the phone at the ear, calling in speaker mode, and calling with a headset. This emitted power is directly proportional to the RF-EMF exposure and was translated into specific absorption rate (SAR) using numerical simulations. Simultaneously, sound pressure levels were recorded and speech intelligibility was assessed during each phone call. The results show that RF-EMF exposure, quantified as the specific absorption in the head, is reduced when speaker mode or a headset is used rather than calling with the phone at the ear. Personal exposure to sound pressure is likewise highest when the phone is held at the ear. On the other hand, when background noise is present, speech perception is best when calling with the phone at the ear compared with the other conditions studied.
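For context, the standard definition of the quantity involved (general background, not taken from the article itself): the specific absorption rate relates the electric field induced in tissue to the absorbed power per unit mass,

\mathrm{SAR} = \frac{\sigma \, |E|^{2}}{\rho} \quad [\mathrm{W/kg}],

where σ is the tissue conductivity (S/m), |E| the RMS electric field strength (V/m), and ρ the tissue mass density (kg/m³); in practice it is averaged over 1 g or 10 g of tissue. Because SAR scales linearly with absorbed power, the recorded transmit power can be mapped to SAR once numerical simulations fix the proportionality constant for each calling scenario.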

Source
http://dx.doi.org/10.1016/j.envres.2019.05.006

Publication Analysis

Top Keywords

speech intelligibility (20); sound pressure (20); phone calls (12); exposure rf-emfs (12); exposure (9); radio-frequency electromagnetic (8); electromagnetic fields (8); pressure level (8); intelligibility sound (8); held ear (8)

Similar Publications

Multi-talker speech intelligibility requires successful separation of the target speech from background speech. Successful speech segregation relies on bottom-up neural coding fidelity of sensory information and top-down effortful listening. Here, we studied the interaction between temporal processing measured using Envelope Following Responses (EFRs) to amplitude modulated tones, and pupil-indexed listening effort, as it related to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults.


Script training is a speech-language intervention designed to promote fluent connected speech via repeated rehearsal of functional content. This type of treatment has proven beneficial for individuals with aphasia and apraxia of speech caused by stroke and, more recently, for individuals with primary progressive aphasia (PPA). In the largest study to date evaluating the efficacy of script training in individuals with nonfluent/agrammatic primary progressive aphasia (nfvPPA; Henry et al.


Unlabelled: Central auditory disorders (CSD) are impairments in the processing of sound stimuli, including speech, above the cochlear nuclei of the brainstem, manifested mainly by difficulty recognizing speech, especially in noisy environments. Children with this pathology are more likely to have behavioral problems; impaired auditory, linguistic, and cognitive development; and, in particular, difficulties with learning at school.

Objective: To review published data on the epidemiology of central auditory disorders in school-age children.


How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?

Audiol Res

December 2024

Division of Audiology, Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN 55902, USA.

Background/objectives: Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition under the best hearing aid conditions, in either quiet or noisy environments, with speech and noise presented through a single loudspeaker. This study examines the influence of deep neural network-based (DNN-based) noise reduction on cochlear implant evaluation.

Methods: Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratios (SNRs) through one loudspeaker.
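As an aside on this kind of test setup: presenting speech in multi-talker babble at a fixed SNR is typically done by scaling the noise against the measured speech power before mixing. A minimal sketch, assuming NumPy arrays as inputs (the function name and signal variables are illustrative, not details from the study):

import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale babble so the speech-to-babble power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)                        # average speech power
    p_babble = np.mean(babble ** 2)                        # average babble power
    target_p_babble = p_speech / (10 ** (snr_db / 10.0))   # babble power for desired SNR
    gain = np.sqrt(target_p_babble / p_babble)             # amplitude scaling factor
    return speech + gain * babble

# e.g. the two SNR conditions described above:
# mixed_5  = mix_at_snr(speech, babble, 5.0)
# mixed_10 = mix_at_snr(speech, babble, 10.0)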


Background/objectives: Understanding speech in background noise is a challenging task for listeners with normal hearing and even more so for individuals with hearing impairments. The primary objective of this study was to develop Romanian speech material in noise to assess speech perception in diverse auditory populations, including individuals with normal hearing and those with various types of hearing loss. The goal was to create a versatile tool that can be used in different configurations and expanded for future studies examining auditory performance across various populations and rehabilitation methods.

