We present a fast and accurate method for dense depth reconstruction, specifically tailored to sparse, wide-baseline light field data captured with camera arrays. In our method, the source images are over-segmented into non-overlapping compact superpixels. We model superpixels as planar patches in image space and use them as the basic primitives for depth estimation. This superpixel-based representation yields the desired reduction in both memory and computation requirements while preserving image geometry with respect to object contours. The initial depth maps, obtained by plane sweeping independently for each view, are jointly refined via an iterative, belief-propagation-like optimization in the superpixel domain. During the optimization, smoothness between neighboring superpixels and geometric consistency between the views are enforced. To ensure rapid information propagation into textureless and occluded regions, candidates are sampled not only from the immediate superpixel neighbors but also from larger neighborhoods. Additionally, to make full use of parallel graphics hardware, a synchronous message-update schedule is employed, allowing all superpixels of all images to be processed at once. This way, the distribution of the scene geometry becomes distinct already after the first iterations, facilitating stability and fast convergence of the refinement procedure. We demonstrate that a few refinement iterations yield globally consistent dense depth maps even in the presence of wide textureless regions and occlusions. The experiments show that, while depth reconstruction takes about a second per full high-definition view, the accuracy of the obtained depth maps is comparable with state-of-the-art results that otherwise require much longer processing times.
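To make the refinement loop concrete, below is a minimal Python sketch of a synchronous, belief-propagation-like update over superpixels for a single view. It is a toy illustration under stated assumptions, not the authors' GPU implementation: the plane-sweep cost volume is random, the cross-view geometric-consistency term is omitted for brevity, and all names (`refine_once`, `FAR_SAMPLES`, `LAMBDA_SMOOTH`, ...) are hypothetical placeholders.

```python
# Toy sketch of superpixel-based depth refinement with a synchronous,
# BP-like update schedule. All costs and graph structure are synthetic;
# this is an illustration of the idea, not the method's actual code.
import numpy as np

rng = np.random.default_rng(0)

N_SUPERPIXELS = 200          # superpixels in one view
N_PLANES = 32                # plane-sweep depth hypotheses
FAR_SAMPLES = 4              # extra candidates drawn from a larger neighborhood
LAMBDA_SMOOTH = 0.1          # weight of the pairwise smoothness term

# Initialization: per-superpixel plane-sweep data cost (random stand-in).
# In the real method this would be photo-consistency aggregated over the
# pixels of each superpixel, for every swept plane and every view.
cost = rng.random((N_SUPERPIXELS, N_PLANES))
depth = cost.argmin(axis=1)  # independent per-view initial depth labels

# Immediate superpixel adjacency (random stand-in for the segmentation graph).
neighbors = [rng.choice(N_SUPERPIXELS, size=4, replace=False)
             for _ in range(N_SUPERPIXELS)]

def refine_once(depth, cost):
    """One synchronous sweep: every superpixel tests candidate depth labels
    proposed by its immediate neighbors and by randomly sampled far
    superpixels, keeping the cheapest one (data + truncated smoothness)."""
    new_depth = depth.copy()
    for s in range(N_SUPERPIXELS):
        far = rng.choice(N_SUPERPIXELS, size=FAR_SAMPLES, replace=False)
        candidates = np.unique(np.concatenate((
            [depth[s]], depth[neighbors[s]], depth[far])))
        # Truncated smoothness against the neighbors' current labels.
        smooth = np.minimum(np.abs(candidates[:, None]
                                   - depth[neighbors[s]][None, :]), 3).sum(axis=1)
        energy = cost[s, candidates] + LAMBDA_SMOOTH * smooth
        new_depth[s] = candidates[energy.argmin()]
    return new_depth         # all superpixels are updated "at once"

for _ in range(5):           # a few iterations already suffice per the abstract
    depth = refine_once(depth, cost)
```

In the paper's setting, this synchronous schedule would run on the GPU over all superpixels of all views simultaneously, with an additional cross-view consistency term in the energy; the far-neighborhood sampling is what lets labels jump quickly into textureless and occluded regions.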
DOI: http://dx.doi.org/10.1109/TIP.2019.2959233