In this paper, a simplified yet efficient deep convolutional neural network architecture is presented for lung image classification. The images used for classification are computed tomography (CT) scans obtained from two publicly available databases that are widely used in the scientific literature. Six external shape-based features, viz. solidity, circularity, the discrete Fourier transform of the radial length (RL) function, the histogram of oriented gradients (HOG), moment, and the histogram of the active contour image, are also identified and embedded into the proposed convolutional neural network. Performance is measured in terms of average precision and average recall and compared with six similar methods for biomedical image classification. Averaged over the two databases, the proposed system achieves an average precision of 95.26% and an average recall of 69.56%.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7064668 | PMC
http://dx.doi.org/10.1007/s10278-019-00245-9 | DOI Listing
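As an illustration only (not the paper's implementation), the sketch below shows how two of the listed shape descriptors, solidity and circularity, plus a HOG vector might be computed from a segmented lung region and concatenated with CNN features before classification. The layer sizes and the use of scikit-image and PyTorch are assumptions.

```python
# Illustrative sketch only: hand-crafted shape features fused with CNN features.
# Layer sizes and the scikit-image/PyTorch APIs are assumptions, not the paper's code.
import numpy as np
import torch
import torch.nn as nn
from skimage.measure import label, regionprops
from skimage.feature import hog

def shape_features(mask: np.ndarray, image: np.ndarray) -> np.ndarray:
    """Solidity, circularity, and a HOG vector for one segmented region (mask assumed non-empty)."""
    region = max(regionprops(label(mask)), key=lambda r: r.area)
    solidity = region.solidity                                        # area / convex hull area
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-8)
    hog_vec = hog(image, orientations=9, pixels_per_cell=(32, 32),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([[solidity, circularity], hog_vec]).astype(np.float32)

class FusionCNN(nn.Module):
    """Small CNN whose flattened features are concatenated with external shape features."""
    def __init__(self, n_shape_feats: int, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4 + n_shape_feats, n_classes)

    def forward(self, x, shape_feats):
        z = self.conv(x).flatten(1)                     # CNN features
        return self.fc(torch.cat([z, shape_feats], 1))  # fuse with external features
```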
Biomed Phys Eng Express
January 2025
National School of Electronics and Telecommunication of Sfax, Route Mahdia, Sfax 3012, Tunisia.
Deep learning has emerged as a powerful tool in medical imaging, particularly for corneal topographic map classification. However, the scarcity of labeled data poses a significant challenge to achieving robust performance. This study investigates the impact of various data augmentation strategies on enhancing the performance of a customized convolutional neural network model for corneal topographic map classification.
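Purely as a hedged illustration of the kind of augmentation strategies such a study might compare (the specific transforms, parameters, and the use of torchvision are assumptions, not the study's protocol):

```python
# Example augmentation pipelines one might compare for a small labelled image set;
# transforms and parameters are assumptions, not the study's actual strategies.
import torchvision.transforms as T

augmentations = {
    "none": T.Compose([T.ToTensor()]),
    "geometric": T.Compose([
        T.RandomHorizontalFlip(),
        T.RandomRotation(degrees=10),
        T.ToTensor(),
    ]),
    "photometric": T.Compose([
        T.ColorJitter(brightness=0.2, contrast=0.2),
        T.ToTensor(),
    ]),
}
# Each pipeline would be applied to the training split only, and the same CNN
# retrained per pipeline so that test performance isolates the augmentation effect.
```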
Bioinformatics
January 2025
College of Artificial Intelligence, Nankai University, Tianjin, 300350, China.
Motivation: Drug-disease, gene-disease, and drug-gene relationships, as high-frequency edge types, describe complex biological processes within the biomedical knowledge graph. The structural patterns formed by these three edge types are the graph motifs of (disease, drug, gene) triplets. Among them, the triangle is a stable and important motif structure in the network, while other motifs beyond the triangle also capture rich semantic relationships.
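A minimal sketch of the triangle motif the abstract refers to, enumerated over a toy heterogeneous graph with networkx; the node names and edges are hypothetical examples, not data from the paper.

```python
# Hypothetical example: enumerating (disease, drug, gene) triangle motifs in a toy
# heterogeneous graph; nodes and edges are made up for illustration only.
import networkx as nx
from itertools import product

G = nx.Graph()
G.add_nodes_from(["asthma"], ntype="disease")
G.add_nodes_from(["salbutamol"], ntype="drug")
G.add_nodes_from(["ADRB2"], ntype="gene")
G.add_edges_from([("asthma", "salbutamol"),   # drug-disease edge
                  ("asthma", "ADRB2"),        # gene-disease edge
                  ("salbutamol", "ADRB2")])   # drug-gene edge

def triangle_motifs(G):
    """All (disease, drug, gene) triplets whose three pairwise edges are present."""
    by_type = {t: [n for n, d in G.nodes(data=True) if d["ntype"] == t]
               for t in ("disease", "drug", "gene")}
    return [(di, dr, ge)
            for di, dr, ge in product(by_type["disease"], by_type["drug"], by_type["gene"])
            if G.has_edge(di, dr) and G.has_edge(di, ge) and G.has_edge(dr, ge)]

print(triangle_motifs(G))  # [('asthma', 'salbutamol', 'ADRB2')]
```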
Network
January 2025
Department of Computer Science and Engineering, Knowledge Institute of Technology, Salem, India.
Image retrieval is the process of retrieving images relevant to a query image from the internet with minimal search time. The problem with conventional Content-Based Image Retrieval (CBIR) systems is that they produce retrieval results for either colour images or greyscale images alone. Moreover, such CBIR systems are complex and consume considerable time to produce significant retrieval results.
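For illustration only (not the article's retrieval method), here is a minimal sketch of a histogram-based CBIR step that treats colour and greyscale queries uniformly; the chi-square ranking and 32-bin histograms are assumptions.

```python
# Minimal CBIR sketch: per-channel histogram features ranked by chi-square distance.
# Greyscale images are replicated to 3 channels so colour and grey features are comparable.
import numpy as np

def histogram_feature(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalised per-channel intensity histogram for an 8-bit image."""
    channels = image if image.ndim == 3 else np.stack([image] * 3, axis=-1)
    hists = [np.histogram(channels[..., c], bins=bins, range=(0, 255))[0]
             for c in range(channels.shape[-1])]
    feat = np.concatenate(hists).astype(np.float64)
    return feat / (feat.sum() + 1e-12)

def rank_by_similarity(query_feat: np.ndarray, database_feats: list) -> list:
    """Indices of database images sorted by chi-square distance to the query."""
    chi2 = lambda a, b: 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-12))
    return sorted(range(len(database_feats)),
                  key=lambda i: chi2(query_feat, database_feats[i]))
```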
Radiol Med
January 2025
Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China.
Background: Accurate differentiation between benign and malignant pancreatic lesions is critical for effective patient management. This study aimed to develop and validate a novel deep learning network using baseline computed tomography (CT) images to classify pancreatic lesions.
Methods: This retrospective study included 864 patients (422 men, 442 women) with confirmed histopathological results across three medical centers, forming a training cohort, internal testing cohort, and external validation cohort.
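As a hedged sketch of the centre-wise cohort design described above (the column names, the 70/30 split ratio, and the use of pandas are assumptions, not details from the study):

```python
# Hypothetical centre-wise split: one centre held out for external validation,
# the remaining patients split into training and internal testing cohorts.
import pandas as pd

def split_by_centre(df: pd.DataFrame, external_centre: str, seed: int = 0):
    """Return (training, internal_test, external_validation) cohorts."""
    external = df[df["centre"] == external_centre]
    internal = df[df["centre"] != external_centre].sample(frac=1.0, random_state=seed)
    n_train = int(0.7 * len(internal))          # assumed 70/30 internal split
    return internal.iloc[:n_train], internal.iloc[n_train:], external
```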
ACS Sens
January 2025
Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea.
Semiconductor metal oxide (SMO) gas sensors are gaining prominence owing to their high sensitivity, rapid response, and cost-effectiveness. These sensors detect changes in resistance resulting from oxidation-reduction reactions with target gases and respond to a variety of gases simultaneously. However, their inherent limitation lies in selectivity.