Infrared (IR) spectroscopic imaging holds potential for wide use in medical imaging because it captures both chemical and spatial information. The complexity of these data both necessitates machine intelligence and presents an opportunity to harness a high-dimensional data set that offers far more information than today's manually interpreted images. While convolutional neural networks (CNNs), including the well-known U-Net model, have demonstrated impressive performance in image segmentation, the inherent locality of convolution limits the effectiveness of these models for encoding IR data, resulting in suboptimal performance.
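The locality limitation mentioned above can be made concrete with a small sketch (not tied to any model in the abstract): each 3×3 convolution only mixes immediately neighboring pixels, so a stack of two such layers gives every output pixel a receptive field of at most 5×5 — a perturbation outside that window cannot affect the output, no matter how informative the distant spectral context is.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2-D convolution with zero padding, to illustrate locality."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

# Two stacked 3x3 convolutions: each output pixel sees only a 5x5 window.
k = np.ones((3, 3)) / 9.0          # simple averaging kernel
x = np.zeros((16, 16))
x[0, 0] = 1.0                      # perturb one far-away pixel
y = conv2d_same(conv2d_same(x, k), k)

print(y[8, 8])   # 0.0 — (0,0) lies outside the 5x5 receptive field of (8,8)
print(y[1, 1])   # nonzero — (0,0) lies inside the receptive field of (1,1)
```

This is exactly the blind spot that motivates non-local architectures for spectral imaging: long-range correlations in the data never reach a given output unit through a shallow convolutional stack.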
High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures using CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior knowledge-guided losses, and geometry-based 3D fusion techniques.
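As a rough intuition for what geometry-based fusion of the two modalities can mean, here is a minimal hypothetical sketch (not the DDMF implementation, and all names are illustrative): once both segmentations have been registered onto a shared voxel grid, CBCT supplies the bone/root labels and the higher-resolution IOS surface overrides the crown region.

```python
import numpy as np

def fuse_tooth_bone(cbct_seg, ios_seg):
    """Fuse two registered label volumes on a shared voxel grid.
    Labels: 0 = background, 1 = bone/root (from CBCT), 2 = crown (from IOS).
    Illustrative only — real pipelines handle registration error and
    surface-to-voxel conversion, which are omitted here."""
    fused = cbct_seg.copy()
    fused[ios_seg == 2] = 2          # IOS crown labels take precedence
    return fused

# Toy volumes standing in for registered segmentation outputs.
cbct = np.zeros((4, 4, 4), dtype=int)
cbct[:2] = 1                          # coarse root/bone region from CBCT
ios = np.zeros((4, 4, 4), dtype=int)
ios[3] = 2                            # fine crown region from IOS
fused = fuse_tooth_bone(cbct, ios)
```

The design point this sketch captures is the complementarity of the modalities: CBCT sees below the gum line but at lower surface fidelity, while IOS captures the visible crown precisely, so a fused model keeps the best of each.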