Recent advances in nonparametric contrast sensitivity function (CSF) estimation have yielded a new tradeoff between accuracy and efficiency not available to classical parametric estimators. An additional advantage of this new framework is the ability to independently tune multiple aspects of the estimator to seek further improvements. Machine learning CSF estimation with Gaussian processes allows for design optimization in the kernel, acquisition function, and underlying task representation, to name a few.
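The abstract above does not include code, but the following minimal sketch illustrates the kind of acquisition step such a Gaussian process estimator can use: fit a probabilistic classifier to the trials collected so far and probe next where the predicted detection probability is closest to 0.5. The scikit-learn classifier, kernel, grid ranges, and variable names are this example's assumptions, not the published implementation.

```python
# Hedged sketch of one active-learning step for a machine learning CSF
# estimator: fit a Gaussian process classifier to the trials collected so far,
# then probe where the model is least certain. Grid ranges, kernel choice,
# and the toy trial data are assumptions, not the published implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Past trials: columns are (log2 spatial frequency, log10 contrast);
# y = 1 if the stimulus was detected, 0 otherwise.
X_seen = np.array([[0.0, -0.5], [1.0, -1.0], [2.0, -1.5], [3.0, -0.8]])
y_seen = np.array([1, 1, 0, 1])

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X_seen, y_seen)

# Candidate stimuli on a grid over the (assumed) stimulus space.
freqs = np.linspace(-1.0, 5.0, 25)      # log2 cycles/degree
contrasts = np.linspace(-3.0, 0.0, 25)  # log10 Michelson contrast
grid = np.array([[f, c] for f in freqs for c in contrasts])

# Uncertainty sampling: the next probe is the grid point whose predicted
# detection probability is closest to 0.5.
p_detect = gpc.predict_proba(grid)[:, 1]
next_probe = grid[np.argmin(np.abs(p_detect - 0.5))]
print("next stimulus (log2 freq, log10 contrast):", next_probe)
```

Swapping the kernel, the acquisition rule, or the way the task is represented in the feature space corresponds loosely to the independently tunable design choices the abstract highlights.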
Computational audiology (CA) has expanded over the last few years with improvements in computing power and the rise of machine learning (ML) models. Several audiogram databases are now available and have been used to improve the accuracy of CA models as well as to reduce testing time and diagnostic complexity. However, these CA models have mainly been trained on single populations.
Multidimensional psychometric functions can typically be estimated nonparametrically for greater accuracy or parametrically for greater efficiency. By recasting the estimation problem from regression to classification, however, powerful machine learning tools can be leveraged to provide an adjustable balance between accuracy and efficiency. Contrast sensitivity functions (CSFs) are behaviorally estimated curves that provide insight into both peripheral and central visual function.
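As a rough illustration of the regression-to-classification recast described above, the sketch below labels each simulated trial as detected or not detected, fits a probabilistic classifier over (frequency, contrast), and reads the estimated threshold curve off the 0.5-probability contour. The quadratic logistic model, simulated data, and names are illustrative assumptions; the work summarized in this listing uses Gaussian process methods instead.

```python
# Hedged sketch of the regression-to-classification recast: label each trial
# as detected / not detected, fit a probabilistic classifier over
# (frequency, contrast), and read the threshold curve off the 0.5-probability
# contour. The quadratic logistic model and simulated data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Simulated trials: (log spatial frequency, log contrast) -> binary response,
# generated from a toy U-shaped threshold (i.e. inverted-U sensitivity).
X = rng.uniform([-1.0, -3.0], [5.0, 0.0], size=(200, 2))
true_threshold = -2.0 + 0.15 * (X[:, 0] - 2.0) ** 2
y = (X[:, 1] + 0.3 * rng.standard_normal(200) > true_threshold).astype(int)

clf = make_pipeline(PolynomialFeatures(2), LogisticRegression(max_iter=1000))
clf.fit(X, y)

# Estimated threshold at each frequency: the contrast whose predicted
# detection probability is closest to 0.5.
contrast_grid = np.linspace(-3.0, 0.0, 301)
for f in np.linspace(0.0, 4.0, 5):
    probs = clf.predict_proba(
        np.column_stack([np.full_like(contrast_grid, f), contrast_grid]))[:, 1]
    threshold = contrast_grid[np.argmin(np.abs(probs - 0.5))]
    print(f"freq {f:.1f}: estimated log-contrast threshold {threshold:.2f}")
```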
Purpose: Pediatric brain tumor patients often experience significant cognitive sequelae. Resting-state functional MRI (rsfMRI) provides a measure of brain network organization, and we hypothesize that pediatric brain tumor patients treated with proton therapy will demonstrate abnormal brain network architecture related to cognitive outcome and radiation dosimetry.
Participants and Methods: Pediatric brain tumor patients treated with proton therapy were enrolled on a prospective study of cognitive assessment using the NIH Toolbox Cognitive Domain.
Survivors of pediatric brain tumors experience significant cognitive deficits from their diagnosis and treatment. The exact mechanisms of cognitive injury are poorly understood, and validated predictors of long-term cognitive outcome are lacking. Resting-state functional magnetic resonance imaging allows for the study of the spontaneous fluctuations in bulk neural activity, providing insight into brain organization and function.
People differ considerably in the extent to which they benefit from working memory (WM) training. Although there is increasing research focusing on individual differences associated with WM training outcomes, we still lack an understanding of which specific individual differences, and in what combination, contribute to inter-individual variations in training trajectories. In the current study, 568 undergraduates completed one of several N-back intervention variants over the course of two weeks.
The global digital transformation enables computational audiology for advanced clinical applications that can reduce the global burden of hearing loss. In this article, we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision, and efficiency of hearing health care services. We also raise awareness of risks that must be addressed to enable a safe digital transformation in audiology.
Hidden hearing loss manifests as speech perception difficulties despite normal hearing thresholds. A new study shows that the neural compensation induced by this disorder may actually improve speech perception under narrow conditions within an overall profile of degradation.
The goal of precision medicine (individually tailored treatments) is not being achieved for neurobehavioural conditions such as psychiatric disorders. Traditional randomized clinical trial methods are insufficient for advancing precision medicine because of the dynamic complexity of these conditions. We present a pragmatic solution: the precision clinical trial framework, encompassing methods for individually tailored treatments.
Objectives: When one ear of an individual can hear significantly better than the other, evaluating the worse ear with loud probe tones may require delivering masking noise to the better ear so that the probe tones are not inadvertently heard there. Current masking protocols are confusing, laborious, and time-consuming. Adding a standardized masking protocol to an active machine learning audiogram procedure could potentially alleviate all of these drawbacks by dynamically adapting the masking as needed for each individual.
The visual field, or threshold perimetry, exam remains the gold-standard clinical tool for evaluating visual dysfunction in glaucoma and other disorders of vision. Administration of this exam has evolved over the years into a sophisticated, standardized, automated algorithm that relies heavily on specifics of disease processes particular to common retinal disorders. The purpose of this study is to evaluate the utility of a novel general estimator applied to visual field testing.
Speech recognition is improved when the acoustic input is accompanied by visual cues provided by a talking face (Erber in Journal of Speech and Hearing Research, 12(2), 423-425, 1969; Sumby & Pollack in The Journal of the Acoustical Society of America, 26(2), 212-215, 1954). One way that the visual signal facilitates speech recognition is by providing the listener with information about fine phonetic detail that complements information from the auditory signal. However, given that degraded face stimuli can still improve speech recognition accuracy (Munhall, Kroos, Jozan, & Vatikiotis-Bateson in Perception & Psychophysics, 66(4), 574-583, 2004), and static or moving shapes can improve speech detection accuracy (Bernstein, Auer, & Takayanagi in Speech Communication, 44(1-4), 5-18, 2004), aspects of the visual signal other than fine phonetic detail may also contribute to the perception of speech.
Our intuition regarding "average" is rooted in one-dimensional thinking, such as the distribution of height across a population. This intuition breaks down in higher dimensions when multiple measurements are combined: fewer individuals are close to average for many measurements simultaneously than for any single measurement alone. This phenomenon is known as the curse of dimensionality.
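A quick simulation makes the claim concrete. Assuming independent standard-normal measurements (an idealization for illustration, not data from the article), the share of a population falling within half a standard deviation of the mean on every measurement at once drops geometrically with the number of measurements:

```python
# Quick numerical illustration (not data from the article): with independent
# standard-normal measurements, the fraction of a population within 0.5 SD of
# the mean on every measurement shrinks geometrically with dimensionality.
import numpy as np

rng = np.random.default_rng(42)
n_people = 100_000
for n_traits in (1, 2, 5, 10):
    traits = rng.standard_normal((n_people, n_traits))
    near_average = np.all(np.abs(traits) < 0.5, axis=1).mean()
    print(f"{n_traits:2d} trait(s): {near_average:.3%} near average on all of them")
# Roughly 38% for one trait, ~15% for two, ~0.8% for five,
# and on the order of 0.01% for ten.
```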
Objectives: A confluence of recent developments in cloud computing, real-time web audio, and machine learning psychometric function estimation has made wide dissemination of sophisticated turn-key audiometric assessments possible. The authors have combined these capabilities into an online (i.e.
Atten Percept Psychophys, August 2018
The original version of this article neglected to mention a conflict of interest. DLB has a patent pending on technology described in this manuscript.
Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation.
Atten Percept Psychophys, April 2018
Psychometric functions are typically estimated by fitting a parametric model to categorical subject responses. Procedures to estimate unidimensional psychometric functions (i.e.
The notion that neurons with higher selectivity carry more information about external sensory inputs is widely accepted in neuroscience. High-selectivity neurons respond to a narrow range of sensory inputs, and thus would be considered highly informative by rejecting a large proportion of possible inputs. In auditory cortex, neuronal responses are less selective immediately after the onset of a sound and then become highly selective in the following sustained response epoch.
Neurons that respond preferentially to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations.