Publications by authors named "Chloe Stoll"

Purpose: Obesity is a clear risk factor for hypertension. In obese patients, blood pressure (BP) measurement may be biased by cuff size and upper arm shape, compromising its accuracy. This study aimed to assess the accuracy of the OptiBP smartphone application across three body mass index (BMI) categories (normal, overweight, and obese).

During real-life interactions, facial expressions of emotion are perceived dynamically, with multimodal sensory information. In the absence of auditory sensory inputs, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties. Whether a dynamic advantage exists for deaf observers therefore remains unresolved.

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel.

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural: they are produced with the body and perceived via the human visual system. Signers fixate on the face of interlocutors and do not look directly at the hands moving in the inferior visual field.

Studies have observed that deaf signers have a larger visual field (VF) than hearing non-signers, with a particularly large extension in the lower part of the VF. This enhancement could stem from early deafness or from extensive use of sign language, since the lower VF is critical for perceiving and understanding linguistic gestures in sign language communication. The aim of the present study was to explore the potential impact of sign language experience, in the absence of deafness, on VF sensitivity in its lower part.

Previous research has suggested that early deaf signers differ in face processing. However, which aspects of face processing are changed, and what role sign language may have played in that change, remain unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers.

Background. Commonly manufactured depth sensors generate depth images conveying the spatial information that humans normally obtain through their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs).
