Food image classification, a subdomain of Computer Vision (CV), focuses on the automatic classification of food items represented through images. The technology has attracted considerable attention in recent years thanks to its widespread applications, spanning dietary monitoring and nutrition studies to restaurant recommendation systems. By leveraging advances in Deep Learning (DL), especially Convolutional Neural Networks (CNNs), food image classification has become an effective means of interacting with and understanding the nuances of the culinary world. The current research article develops a Bio-Inspired Spotted Hyena Optimizer with a Deep Convolutional Neural Network-based Automated Food Image Classification (SHODCNN-FIC) approach. The main objective of the SHODCNN-FIC method is to recognize and classify food images into distinct types by combining a DL model with a hyperparameter tuning approach. To accomplish this, the method exploits the DCNN-based Xception model to derive feature vectors and applies the Spotted Hyena Optimizer (SHO) algorithm to select optimal hyperparameters for the Xception model. An Extreme Learning Machine (ELM) model then performs the detection and classification of food images. A detailed set of experiments was conducted to demonstrate the improved food image classification performance of the proposed SHODCNN-FIC technique, and a wide range of simulation outcomes confirmed its superior performance over other DL models.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10604351 | PMC |
| http://dx.doi.org/10.3390/biomimetics8060493 | DOI Listing |
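The abstract above describes a three-stage pipeline: Xception-derived feature vectors, SHO-tuned hyperparameters, and an ELM classifier. As a minimal NumPy sketch of the ELM stage only, the following uses random synthetic vectors as a stand-in for Xception features; all names and dimensions are illustrative, not the authors' implementation:

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Train an Extreme Learning Machine: random hidden layer,
    closed-form output weights via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    T = np.eye(y.max() + 1)[y]                   # one-hot class targets
    beta = np.linalg.pinv(H) @ T                 # output weights in closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Synthetic 32-D stand-in for Xception feature vectors of two food classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 32)),
               rng.normal(3.0, 1.0, size=(50, 32))])
y = np.array([0] * 50 + [1] * 50)

W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
print((pred == y).mean())  # training accuracy on well-separated classes
```

The closed-form solve is what makes ELM attractive here: only the output weights are fitted, so classification on top of fixed CNN features is a single pseudoinverse rather than iterative training.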
BMC Med Imaging
January 2025
Faculty of Medicine, Department of Obstetrics and Gynecology, Erciyes University, Yenidogan Neighborhood, Turhan Baytop Street No:1, Kayseri, 38280, Turkey.
Aim: This study aimed to evaluate the effect of maternal vitamin D use during intrauterine life on fetal bone development using ultrasonographic image processing techniques.
Materials and Methods: We evaluated 52 pregnant women receiving vitamin D supplementation and 50 who refused vitamin D supplementation. Ultrasonographic imaging was performed on the fetal clavicle at 37-40 weeks of gestation.
Spectrochim Acta A Mol Biomol Spectrosc
January 2025
School of Precision Instrument and Opto-electronics Engineering, Tianjin University, Tianjin 300072, China.
The detection of pesticide residues in agricultural products is crucial for ensuring food safety. However, traditional methods are often constrained by slow processing speeds and a restricted analytical scope. This study presents a novel method that uses filter-array-based hyperspectral imaging enhanced by a dynamic filtering demosaicking algorithm, which significantly improves the speed and accuracy of detecting pesticide residues.
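In a filter-array hyperspectral sensor like the one described above, each pixel samples only one spectral band, so a demosaicking step must reconstruct the full data cube. The paper's dynamic filtering algorithm is not specified here; the sketch below shows only the generic problem, using naive nearest-block interpolation over an assumed 2x2 filter pattern (all names and the pattern layout are illustrative):

```python
import numpy as np

def demosaic_nearest(raw, pattern=2):
    """Reconstruct a hyperspectral cube from a mosaic sensor frame.

    raw: 2-D frame where pixel (i, j) sampled band (i % p) * p + (j % p).
    Returns an (H, W, p*p) cube, filling each band by repeating its sparse
    samples over their p x p cells (a crude placeholder for the paper's
    dynamic filtering step).
    """
    p = pattern
    H, W = raw.shape
    cube = np.zeros((H, W, p * p))
    for di in range(p):
        for dj in range(p):
            band = di * p + dj
            samples = raw[di::p, dj::p]  # sparse samples of this band
            # nearest-block upsampling back to full sensor resolution
            cube[:, :, band] = np.repeat(np.repeat(samples, p, axis=0),
                                         p, axis=1)[:H, :W]
    return cube

# Demo: a 4x4 frame imaging a scene where band b is constant at 10*b,
# so the reconstruction should recover each band exactly
raw = np.fromfunction(lambda i, j: 10 * ((i % 2) * 2 + (j % 2)), (4, 4))
cube = demosaic_nearest(raw)
print(cube.shape)  # (4, 4, 4)
```

Real demosaicking methods replace the block repetition with spatially adaptive interpolation, which is where the speed/accuracy trade-off the abstract mentions arises.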
J Hazard Mater
January 2025
School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430070, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan 430070, China.
Artificial intelligence-assisted imaging biosensors have attracted increasing attention due to their flexibility, allowing for the digital image analysis and quantification of biomarkers. While deep learning methods have led to advancements in biomarker identification, the diversity in the density and adherence of targets still poses a serious challenge. In this regard, we propose CellNet, a neural network model specifically designed for detecting dense targets.
Biol Rev Camb Philos Soc
January 2025
Wildlife Observatory of Australia (WildObs), Queensland Cyber Infrastructure Foundation (QCIF), Brisbane, Queensland, 4072, Australia.
Camera traps are widely used in wildlife research and monitoring, so it is imperative to understand their strengths, limitations, and potential for increasing impact. We investigated a decade of use of wildlife cameras (2012-2022) with a case study on Australian terrestrial vertebrates using a multifaceted approach. We (i) synthesised information from a literature review; (ii) conducted an online questionnaire of 132 professionals; (iii) hosted an in-person workshop of 28 leading experts representing academia, non-governmental organisations (NGOs), and government; and (iv) mapped camera trap usage based on all sources.
Sci Rep
January 2025
Department of Veterinary Anatomy, The University of Tokyo, Yayoi 1-1-1, Bunkyo-ku, Tokyo, 113-8657, Japan.
An aqueous solution of a common food dye, Fast Green FCF (FG), mimics cholyl-lysyl-fluorescein to visualize embryonic bile flow via a single peritoneal injection into intrauterine mouse embryos. Despite its efficacy in embryos, its suitability for adult mice and small to medium-sized mammals remained uncertain. In this study, we investigated FG cholangiography in adult mice, dogs, and goats.