In mixed reality (MR), augmenting virtual objects consistently with real-world illumination is one of the key factors in providing a realistic and immersive user experience. For this purpose, we propose a novel deep learning-based method to estimate high dynamic range (HDR) illumination from a single RGB image of a reference object. To obtain the illumination of the current scene, previous approaches either inserted a special camera into that scene, which may interfere with the user's immersion, or analyzed the radiance reflected from a passive light probe made of a specific type of material or with a known shape. The proposed method requires no additional gadgets or strong prior cues, and aims to predict illumination from a single image of an observed object covering a wide range of homogeneous materials and shapes. To effectively solve this ill-posed inverse rendering problem, three sequential deep neural networks are employed based on a physically-inspired design. These networks perform end-to-end regression to gradually reduce dependence on the object's material and shape. To cover various conditions, the proposed networks are trained on a large synthetic dataset generated by physically-based rendering. Finally, the reconstructed HDR illumination enables realistic image-based lighting of virtual objects in MR. Experimental results demonstrate the effectiveness of this approach compared against state-of-the-art methods. The paper also suggests some interesting MR applications in indoor and outdoor scenes.
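The abstract gives only a high-level view of the three-stage design, so the following is a minimal PyTorch-style sketch of how such a cascade could be wired together. The stage names, layer sizes, and the low-resolution HDR environment-map output are illustrative assumptions, not the architecture published in the paper.

```python
# Minimal sketch of a three-stage cascade in the spirit of the abstract: each stage
# consumes the previous stage's output so that the prediction gradually becomes less
# dependent on the object's material and shape. All module names, layer sizes, and
# the 16x32 HDR environment-map output are assumptions, not the published design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class StageNet(nn.Module):
    """Small convolutional encoder used for each stage of the cascade."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_ch, 32),
            conv_block(32, 64),
            conv_block(64, out_ch),
        )

    def forward(self, x):
        return self.encoder(x)

class IlluminationCascade(nn.Module):
    """RGB crop -> material-invariant features -> shape-invariant features -> HDR map."""
    def __init__(self, env_h=16, env_w=32):
        super().__init__()
        self.material_net = StageNet(3, 64)    # stage 1: reduce dependence on material
        self.shape_net = StageNet(64, 128)     # stage 2: reduce dependence on shape
        self.light_net = nn.Sequential(        # stage 3: regress the environment map
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 3 * env_h * env_w),
        )
        self.env_h, self.env_w = env_h, env_w

    def forward(self, image):
        feat = self.material_net(image)
        feat = self.shape_net(feat)
        log_hdr = self.light_net(feat).view(-1, 3, self.env_h, self.env_w)
        return torch.exp(log_hdr)              # regress in log space, return linear HDR

if __name__ == "__main__":
    model = IlluminationCascade()
    crop = torch.rand(1, 3, 128, 128)          # a single RGB crop of the observed object
    env_map = model(crop)
    print(env_map.shape)                       # torch.Size([1, 3, 16, 32])
```

In this sketch, regressing in log space and exponentiating at the end is one common way to keep HDR regression numerically stable; the paper itself may handle the dynamic range differently.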
DOI: http://dx.doi.org/10.1109/TVCG.2020.2973050
Health Justice
January 2025
Burnet Institute, Melbourne, Australia.
Background: During the COVID-19 pandemic, governments worldwide introduced law enforcement measures to deter and punish breaches of emergency public health orders. For example, in Victoria, Australia, discretionary fines of A$1,652 were issued for breaching stay-at-home orders, and A$4,957 fines for 'unlawful gatherings'; to date, approximately 30,000 fines remain outstanding or not paid in full. Studies globally have revealed how the expansion of policing powers produced significant collateral damage for marginalized populations, including people from low-income neighborhoods, Indigenous Peoples, sex workers, and people from culturally diverse backgrounds.
J Exp Psychol Hum Percept Perform
January 2025
Department of Experimental Clinical and Health Psychology, Ghent University.
Motivational theories of imitation state that we imitate because doing so has led to positive social consequences in the past. Because movement imitation typically leads to these consequences only when it is perceived by the imitated person, imitation should increase when the interaction partner can see the imitator. Current evidence for this hypothesis is mixed, potentially due to the low ecological validity of previous studies.
Rev Med Suisse
January 2025
Service de neurologie, Clinique bernoise Montana, 3963 Crans-Montana.
Parkinson's disease affects around 6 million people worldwide. It causes both motor and non-motor symptoms. Since there is no cure, medical treatment aims to improve patients' quality of life.
Front Psychol
January 2025
Department of Psychology, Università degli Studi di Torino, Turin, Italy.
J Bone Oncol
February 2025
School of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou, 362001, China.
Objective: Segmenting and reconstructing 3D models of bone tumors from 2D image data is of great significance for assisting disease diagnosis and treatment. However, due to the low distinguishability between tumors and the surrounding tissue in images, existing methods lack accuracy and stability. This study proposes a U-Net model based on double dimensionality reduction and a channel attention gating mechanism, termed DCU-Net, for oncological image segmentation.
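The excerpt names the key components (a U-Net backbone and a channel attention gating mechanism) but not their exact layout, so below is a minimal PyTorch sketch of a squeeze-and-excitation-style channel attention gate as it might sit on a U-Net skip connection; all names, sizes, and the reduction ratio are assumptions for illustration rather than the published DCU-Net design.

```python
# Minimal sketch of a channel attention gate on a U-Net skip connection, in the
# spirit of the DCU-Net description above. The squeeze-and-excitation-style gate
# and all layer sizes are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class ChannelAttentionGate(nn.Module):
    """Re-weights encoder skip features channel-wise before they are concatenated
    with the upsampled decoder features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excite: per-channel weights
            nn.Sigmoid(),
        )

    def forward(self, skip):
        return skip * self.gate(skip)

if __name__ == "__main__":
    skip = torch.rand(1, 64, 128, 128)   # encoder features from a 2D tumor slice
    gated = ChannelAttentionGate(64)(skip)
    print(gated.shape)                   # torch.Size([1, 64, 128, 128])
```

Gating the encoder features channel-wise before concatenation lets the decoder emphasize channels that respond to tumor tissue and suppress those dominated by the surrounding background, which is the kind of behavior a channel attention mechanism is typically intended to provide.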