This paper explores the current state of understanding of in silico oral modelling: the methodologies, technologies and approaches for modelling the whole oral cavity, covering both internally and externally visible structures relevant to oral actions. Such a model could be referred to as a 'complete model', which includes consideration of a full set of facial features (i.
Background: While efforts to establish best practices for functional near-infrared spectroscopy (fNIRS) signal processing have been published, there are still no community standards for applying machine learning to fNIRS data. Moreover, the lack of open-source benchmarks and standard reporting expectations means that published works often claim high generalisation capability, but with poor practices or missing details in the paper. These issues make it hard to evaluate the performance of models when choosing them for brain-computer interfaces.
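One practice the abstract's concern points at can be made concrete: evaluating with subject-wise folds so that no participant contributes trials to both training and test sets, which is a common source of inflated generalisation claims. A minimal sketch, with illustrative names and synthetic subject IDs (not code from the paper):

```python
import numpy as np

def subject_wise_folds(subject_ids, n_folds=3, seed=0):
    """Split trial indices into folds such that all trials from one
    subject land in the same fold (no subject-level leakage)."""
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    fold_of_subject = {s: i % n_folds for i, s in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for idx, s in enumerate(subject_ids):
        folds[fold_of_subject[s]].append(idx)
    return [np.array(f) for f in folds]

# toy example: 6 subjects, 4 trials each
subject_ids = np.repeat(np.arange(6), 4)
folds = subject_wise_folds(subject_ids, n_folds=3)
```

Each fold then holds complete subjects, so a model tested on a fold has never seen those participants during training.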
IEEE Trans Pattern Anal Mach Intell
February 2024
Automatically recognising apparent emotions from face and voice is hard, in part because of various sources of uncertainty, including in the input data and the labels used in a machine learning framework. This paper introduces an uncertainty-aware multimodal fusion approach that quantifies modality-wise aleatoric or data uncertainty towards emotion prediction. We propose a novel fusion framework, in which latent distributions over unimodal temporal context are learned by constraining their variance.
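The general idea behind uncertainty-weighted fusion can be sketched simply (this is an illustration of precision weighting, not the paper's latent-distribution framework): if each modality produces a Gaussian prediction, averaging weighted by inverse variance lets the less uncertain modality dominate.

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Precision-weighted fusion of per-modality Gaussian predictions:
    modalities with lower aleatoric variance get higher weight."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# face modality confident (low variance), voice uncertain
m, v = fuse_gaussians(means=[0.8, 0.2], variances=[0.1, 0.9])
```

The fused mean lands much closer to the confident modality's estimate, and the fused variance is smaller than either input variance.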
Advances in neonatal care have resulted in improved outcomes for high-risk newborns, with technologies playing a significant part, although many were developed for the neonatal intensive care unit. The care provided in the delivery room (DR) during the first few minutes of life can impact short- and long-term neonatal outcomes. Increasingly, technologies have a critical role to play in the DR, particularly in monitoring and information provision.
A baby's gestational age determines whether or not they are premature, which helps clinicians decide on suitable postnatal treatment. The most accurate dating methods use ultrasound scan (USS) machines, but these are expensive, require trained personnel and cannot always be deployed to remote areas. In the absence of USS, the Ballard Score, a postnatal clinical examination, can be used.
Proc Int Conf Autom Face Gesture Recognit
June 2017
The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions.
IEEE Trans Pattern Anal Mach Intell
September 2018
Linear regression is a fundamental building block in many face detection and tracking algorithms, typically used to predict shape displacements from image features through a linear mapping. This paper presents a Functional Regression solution to the least squares problem, which we coin Continuous Regression, resulting in the first real-time incremental face tracker. Contrary to prior work in Functional Regression, in which B-splines or Fourier series were used, we propose to approximate the input space by its first-order Taylor expansion, yielding a closed-form solution for the continuous domain of displacements.
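The conventional sampled baseline that Continuous Regression generalises can be illustrated in a few lines: fit a linear map from image features to shape displacements by least squares over a discrete set of training perturbations. The data and names below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic training set: feature vectors Phi and target shape displacements dS
n, d_feat, d_shape = 200, 16, 4
Phi = rng.normal(size=(n, d_feat))
R_true = rng.normal(size=(d_feat, d_shape))
dS = Phi @ R_true + 0.01 * rng.normal(size=(n, d_shape))

# standard (sampled) least-squares regressor, with a bias column
Phi_b = np.hstack([Phi, np.ones((n, 1))])
R, *_ = np.linalg.lstsq(Phi_b, dS, rcond=None)

def predict_displacement(phi):
    """Map an image-feature vector to a predicted shape displacement."""
    return np.append(phi, 1.0) @ R
```

Continuous Regression replaces the discrete sum over sampled displacements with an integral over a continuous domain, which the paper solves in closed form via a first-order Taylor expansion of the input space.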
Pain-related emotions are a major barrier to effective self-rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions.
IEEE Trans Cybern
February 2014
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of the six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset.
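As a minimal rule-based sketch of what these temporal segments mean (illustrative only; actual detectors in this line of work are learned, not thresholded), an AU intensity trace can be labelled frame by frame from the sign of its first difference:

```python
import numpy as np

def temporal_segments(intensity, eps=0.05):
    """Label each frame of an AU intensity trace as neutral/onset/apex/offset
    using the sign of its first difference (simple rule-based sketch)."""
    intensity = np.asarray(intensity, dtype=float)
    d = np.diff(intensity, prepend=intensity[0])
    labels = []
    for x, dx in zip(intensity, d):
        if x < eps:
            labels.append("neutral")      # face at rest
        elif dx > eps:
            labels.append("onset")        # intensity rising
        elif dx < -eps:
            labels.append("offset")       # intensity falling
        else:
            labels.append("apex")         # held at peak
    return labels

trace = [0.0, 0.2, 0.5, 0.9, 1.0, 1.0, 0.6, 0.2, 0.0]
labels = temporal_segments(trace)
```

The trace above rises (onset), plateaus (apex), decays (offset), and returns to neutral, mirroring the segment structure the abstract describes.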
IEEE Trans Pattern Anal Mach Intell
May 2013
We propose a new algorithm to detect facial points in frontal and near-frontal face images. It combines a regression-based approach with a probabilistic graphical model-based face shape model that restricts the search to anthropomorphically consistent regions. While most regression-based approaches perform a sequential approximation of the target location, our algorithm detects the target location by aggregating the estimates obtained from stochastically selected local appearance information into a single robust prediction.
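The aggregation idea can be sketched with a coordinate-wise median over many noisy local estimates, which resists a minority of grossly wrong votes (this is an illustration of robust aggregation in general, not the paper's exact estimator):

```python
import numpy as np

def robust_aggregate(estimates):
    """Aggregate noisy local target-location estimates with the
    coordinate-wise median, which resists outlier votes."""
    return np.median(np.asarray(estimates, dtype=float), axis=0)

rng = np.random.default_rng(1)
true_point = np.array([30.0, 42.0])
votes = true_point + rng.normal(scale=1.0, size=(50, 2))
votes[:5] += 100.0          # a few grossly wrong local estimates
point = robust_aggregate(votes)
```

Despite five estimates being off by 100 pixels, the aggregated prediction stays within a few pixels of the true location, whereas a plain mean would be pulled far off.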
IEEE Trans Syst Man Cybern B Cybern
February 2012
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.
Med Image Comput Comput Assist Interv
December 2008
This paper presents a new framework for the analysis of anatomical connectivity derived from diffusion tensor MRI. The framework has been applied to estimate whole brain structural networks using diffusion data from 174 adult subjects. In the proposed approach, each brain is first segmented into 83 anatomical regions via label propagation of multiple atlases and subsequent decision fusion.