Over the last few years, camera-based estimation of vital signs, referred to as imaging photoplethysmography (iPPG), has garnered significant attention due to the relative simplicity, unobtrusiveness, and flexibility offered by such measurements. iPPG is expected to be integrated into a host of emerging applications in areas as diverse as autonomous cars, neonatal monitoring, and telemedicine. Despite this potential, the primary challenge of non-contact camera-based measurement is the relative motion between the camera and the subjects. Current techniques employ 2D feature tracking to reduce the effect of subject and camera motion, but they are limited to handling translational and in-plane motion. In this paper, we study, for the first time, the utility of 3D face tracking in allowing iPPG to retain robust performance even in the presence of out-of-plane and large relative motions. We use an RGB-D camera to obtain 3D information about the subjects and use the spatial and depth information to fit a 3D face model and track the model over the video frames. This allows us to estimate correspondence over the entire video with pixel-level accuracy, even in the presence of out-of-plane or large motions. We then estimate iPPG from the warped video data, which ensures per-pixel correspondence over the entire window length used for estimation. Our experiments demonstrate improved robustness when head motion is large.
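
The paper does not include code, but as a rough illustration of the estimation stage described above, the following minimal Python sketch computes a pulse-rate estimate from video frames that have already been warped into per-pixel correspondence by the 3D face tracker. The function name, the green-channel spatial averaging, the skin mask, and the 0.7-4 Hz pulse band are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_heart_rate(warped_frames, fps, skin_mask):
        """Estimate pulse rate (bpm) from motion-compensated video frames.

        warped_frames : (T, H, W, 3) float array; every frame is warped to a
                        common reference so each pixel stays on the same facial point.
        fps           : capture frame rate in Hz.
        skin_mask     : (H, W) boolean array selecting skin pixels in the reference frame.
        """
        # Spatially average the green channel over skin pixels (green carries
        # the strongest plethysmographic signal for RGB cameras).
        trace = warped_frames[:, :, :, 1][:, skin_mask].mean(axis=1)

        # Remove the mean and normalize to unit variance.
        trace = (trace - trace.mean()) / (trace.std() + 1e-8)

        # Band-pass to a plausible pulse band (0.7-4 Hz, i.e. 42-240 bpm).
        b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
        pulse = filtfilt(b, a, trace)

        # Dominant spectral peak within the pulse band -> heart rate in bpm.
        spectrum = np.abs(np.fft.rfft(pulse)) ** 2
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

In practice this would be applied over sliding windows (the window length mentioned in the abstract) so the heart-rate estimate can track changes over time; the key point of the paper is that 3D tracking keeps each pixel of the window on the same patch of skin even under large or out-of-plane head motion.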

Source
http://dx.doi.org/10.1109/EMBC44109.2020.9176065

Similar Publications

Background: Prior to using the exoscope, we speculated that it represented an intermediate tool between a loupe and a microscope and had concerns about its visibility of deep, fine structures. Objective: To evaluate the depths of meningioma for which the exoscope was suitable and to clarify its disadvantages in meningioma resection. Methods: Findings of consecutive meningioma surgeries using a 4K three-dimensional (3D) exoscope over a one-year period were evaluated for visibility of the surgical field, comfort of the surgeon's arm posture, the surgeon's head orientation, and perception of the image delay, accounting for the depth of the tumor.

Background: Training occupational therapy students in manual skills such as goniometry typically requires intensive one-on-one student-teacher interaction and repetitive practice to ensure competency. There is evidence that immersive virtual reality (IVR)-supported embodied learning can improve confidence and performance of skills. Embodied learning refers to the learner's experience of viewing a simulated body and its properties as if they were the learner's own biological body.

Background: Pes planus (flatfoot) and pes cavus (high arch foot) are common foot deformities, often requiring clinical and radiographic assessment for diagnosis and potential subsequent management. Traditional diagnostic methods, while effective, pose limitations such as cost, radiation exposure, and accessibility, particularly in underserved areas.

Aim: To develop deep learning algorithms that detect and classify such deformities using smartphone cameras.

Remote Extended Reality with Markerless Motion Tracking for Sitting Posture Training.

IEEE Robot Autom Lett

November 2024

Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA; Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10027, USA.

Dynamic postural control during sitting is essential for functional mobility and daily activities. Extended reality (XR) offers a promising approach to posture training, addressing the limitations of conventional training related to patient accessibility and ecological validity. We developed a remote XR rehabilitation system with markerless motion tracking for sitting posture training.

Objectives: This study compared the clinical accuracy of two stationary face scanners employing progressive capture and multi-view simultaneous capture scanning technologies, respectively.

Methods: Forty dentate volunteers participated in the study. Soft tissue landmarks were marked with a pen on the participants' faces to measure the distances between them.
