Multispectral imaging (MSI) enables the acquisition of spatial and spectral image-based information in a single process. Spectral scene information can be used to determine the characteristics of materials based on reflection or absorption and thus their material composition. This work focuses on so-called multi-aperture imaging, which enables simultaneous (snapshot) capture of spectrally selective and spatially resolved scene information. Several factors limit the achievable spectral resolution when implementing this imaging principle, e.g., the usable sensor resolution and area, the required spatial scene resolution, and the optical complexity. Careful analysis is therefore needed to specify the multispectral system properties and their realisation. In this work, we present a systematic approach for the application-related implementation of this kind of MSI. We focus on spectral system modeling, data analysis, and machine learning to build a universally usable multispectral loop for finding the best sensor configuration. The approach presented is demonstrated and tested on the classification of waste, a typical application for multispectral imaging.
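The "multispectral loop" for choosing a sensor configuration can be pictured as a search over candidate band subsets, each scored by how well a classifier separates the target classes. The sketch below only illustrates that idea on synthetic data with a scikit-learn random forest; the band count, subset size, classifier, and waste classes are assumptions, not the configuration or pipeline used in the paper.

```python
# Minimal sketch: score candidate filter configurations by classification
# accuracy on reference spectra, keeping the best-performing band subset.
# Data and model choices are illustrative assumptions only.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 8             # hypothetical spectral reference data
X = rng.random((n_pixels, n_bands))    # per-pixel reflectance in each band
y = rng.integers(0, 4, size=n_pixels)  # 4 hypothetical waste classes

best_score, best_bands = -np.inf, None
for bands in combinations(range(n_bands), 3):   # candidate 3-band sensor configs
    score = cross_val_score(
        RandomForestClassifier(n_estimators=25, random_state=0),
        X[:, list(bands)], y, cv=3,
    ).mean()
    if score > best_score:
        best_score, best_bands = score, bands

print(f"best 3-band configuration: {best_bands} (CV accuracy {best_score:.2f})")
```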
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679387 | PMC
http://dx.doi.org/10.3390/s24247984 | DOI Listing
JMIR Res Protoc
January 2025
Decipher Health, Delhi, India.
Background: Type 2 diabetes (T2D) is a leading cause of premature morbidity and mortality globally and affects more than 100 million people in the world's most populous country, India. Nutrition is a critical and evidence-based component of effective blood glucose control, and most dietary advice emphasizes carbohydrate and calorie reduction. Emerging global evidence demonstrates marked interindividual differences in postprandial glucose response (PPGR), although no such data exist in India, and previous studies have primarily evaluated PPGR variation in individuals without diabetes.
Transl Vis Sci Technol
January 2025
School of Optometry and Vision Science, University of New South Wales, Sydney, Australia.
Purpose: The purpose of this study was to develop and validate a deep-learning model for noninvasive anemia detection, hemoglobin (Hb) level estimation, and identification of anemia-related retinal features using fundus images.
Methods: The dataset comprised 2265 participants aged 40 years and above from a population-based study in South India and included ocular and systemic clinical parameters, dilated retinal fundus images, and hematological data such as complete blood counts and Hb concentration levels.
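As a rough illustration of the kind of model such a study might use, the sketch below pairs a shared image backbone with a binary anemia-detection head and an Hb-regression head. The ResNet-18 backbone, head sizes, and loss weighting are assumptions for illustration; the study's actual architecture is not specified in this excerpt.

```python
# Illustrative multi-task model for fundus-based anemia detection and Hb
# estimation. Architecture details are assumptions, not the study's design.
import torch
import torch.nn as nn
from torchvision import models

class FundusAnemiaNet(nn.Module):
    """Shared backbone with an anemia head and an Hb-regression head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights optional
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # expose pooled features
        self.backbone = backbone
        self.anemia_head = nn.Linear(feat_dim, 1)  # logit: anemic vs. not
        self.hb_head = nn.Linear(feat_dim, 1)      # Hb concentration (g/dL)

    def forward(self, x):
        feats = self.backbone(x)
        return self.anemia_head(feats), self.hb_head(feats)

model = FundusAnemiaNet()
images = torch.randn(4, 3, 224, 224)               # dummy fundus-image batch
anemia_logit, hb_pred = model(images)
loss = (nn.BCEWithLogitsLoss()(anemia_logit, torch.ones(4, 1))
        + nn.MSELoss()(hb_pred, torch.full((4, 1), 12.5)))
loss.backward()
```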
Transl Vis Sci Technol
January 2025
Department of Biomedical Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand.
Purpose: The purpose of this study was to develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test.
Methods: This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model.
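The description suggests a two-stage pipeline: restore the RNFL thickness map, then regress pointwise 24-2 sensitivities from the restored map. The sketch below mirrors that structure with a placeholder restoration network standing in for the generative diffusion model; all shapes, layer choices, and the assumed 52-point output are illustrative, not the study's implementation.

```python
# Two-stage sketch: (1) artifact correction of the RNFL thickness map,
# (2) regression of pointwise visual-field sensitivities. All details are
# illustrative assumptions.
import torch
import torch.nn as nn

class RNFLRestorer(nn.Module):
    """Placeholder for the restoration stage (a diffusion model in the study)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, thickness_map):
        return self.net(thickness_map)            # artifact-corrected RNFL map

class VFPredictor(nn.Module):
    """Regress pointwise visual-field sensitivities from the restored map."""
    def __init__(self, n_points=52):              # assumed 24-2 output size
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256), nn.ReLU(),
            nn.Linear(256, n_points),
        )

    def forward(self, restored_map):
        return self.net(restored_map)             # predicted sensitivities (dB)

restorer, predictor = RNFLRestorer(), VFPredictor()
maps = torch.randn(2, 1, 64, 64)                  # dummy RNFL thickness maps
vf_pred = predictor(restorer(maps))
print(vf_pred.shape)                              # torch.Size([2, 52])
```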
JAMA Netw Open
January 2025
Department of Child and Adolescent Psychiatry-Psychotherapy, University Hospital Ulm, Ulm, Germany.
Importance: Associations between child maltreatment (CM) and health have been studied broadly, but most studies focus on multiplicity (the number of experienced CM subtypes). Studies assessing multiple CM characteristics are scarce, partly due to methodological challenges, and have mostly been conducted in patient samples.
Objective: To determine the importance of CM characteristics in association with physical multimorbidity in adulthood for women and men in a representative German sample.
Proc Natl Acad Sci U S A
January 2025
Department of Psychology, City College, City University of New York, New York, NY 10031.
Looking at the world often involves not just seeing things, but feeling things. Modern feedforward machine vision systems that learn to perceive the world in the absence of active physiology, deliberative thought, or any form of feedback that resembles human affective experience offer tools to demystify the relationship between seeing and feeling, and to assess how much of visually evoked affective experiences may be a straightforward function of representation learning over natural image statistics. In this work, we deploy a diverse sample of 180 state-of-the-art deep neural network models trained only on canonical computer vision tasks to predict human ratings of arousal, valence, and beauty for images from multiple categories (objects, faces, landscapes, art) across two datasets.
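A common way to test how much of such ratings is predictable from representation learning alone is to freeze a vision backbone and fit a simple linear readout from its features to the human ratings. The sketch below shows that probing setup with a torchvision ResNet-50 and ridge regression on placeholder data; the backbone choice, probe, and data are assumptions, not the study's exact protocol.

```python
# Linear-probe sketch: frozen vision features -> ridge regression onto ratings.
# Backbone, probe, and synthetic ratings are illustrative assumptions.
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from torchvision import models

backbone = models.resnet50(weights=None)          # stands in for one of the 180 models
backbone.fc = torch.nn.Identity()                 # expose the 2048-d pooled features
backbone.eval()

images = torch.randn(32, 3, 224, 224)             # placeholder image batch
with torch.no_grad():
    feats = backbone(images).numpy()              # (32, 2048) frozen features

ratings = np.random.default_rng(0).random(32)     # placeholder arousal/valence/beauty scores
probe = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(feats, ratings)
print("in-sample R^2 of the linear readout:", probe.score(feats, ratings))
```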