This study proposes an automated system for assessing lung damage severity in coronavirus disease 2019 (COVID-19) patients using computed tomography (CT) images. The CT images are preprocessed to identify the extent of the pulmonary parenchyma (PP) and of ground-glass opacities and pulmonary infiltrates (GGO-PIs). Two types of image were generated from these: a saliency image and a discrete cosine transform (DCT) energy image. A final fused (FF) image, combining the saliency and DCT energy images of the PP and GGO-PI images, was then obtained. Five convolutional neural networks (CNNs) and five classic classification techniques, trained on FF and grayscale PP images, were tested. The study aims to show that a CNN model trained on preprocessed images has significant advantages over one trained on grayscale images. Previous work in this field focused primarily on grayscale images, which presented some limitations; this paper demonstrates that better results are obtained with the FF image than with the grayscale PP image alone. In our experiments, CNN models outperformed traditional artificial intelligence classification techniques. Of the CNNs, Vgg16Net performed best, delivering top-tier classification results for COVID-19 severity assessment, with a recall of 95.38%, precision of 96%, accuracy of 95.84%, and area under the receiver operating characteristic (AUROC) curve of 0.9585; Vgg16Net also yielded the fewest false negative (FN) results. The dataset, comprising 44 COVID-19 patients, was split equally, with half used for training and half for testing.
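The abstract does not specify how the DCT energy image or the fusion is computed. As a rough sketch of the kind of preprocessing described, the following computes a blockwise DCT energy map (sum of squared AC coefficients per block) and fuses it with a saliency map by a simple normalized average; the block size, equal fusion weights, and orthonormal DCT-II formulation are all assumptions, not the paper's method:

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n): rows index frequency, columns index space
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct_energy_image(img, block=8):
    # Blockwise 2-D DCT; per-block energy = sum of squared AC coefficients
    h, w = img.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    img = img[:h, :w].astype(np.float64)
    M = dct2_matrix(block)
    out = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = M @ img[i:i + block, j:j + block] @ M.T  # 2-D DCT-II of the block
            out[i // block, j // block] = (c ** 2).sum() - c[0, 0] ** 2  # drop the DC term
    return out

def normalize(x):
    # Min-max normalization to [0, 1]; constant maps become all zeros
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def fuse(saliency, energy):
    # Illustrative pixel-wise average of the two normalized maps (weights assumed)
    return 0.5 * normalize(saliency) + 0.5 * normalize(energy)
```

Because the DCT matrix is orthonormal, the blockwise transform preserves energy (Parseval), so the AC energy per block is simply the block's total energy minus the squared DC coefficient.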
Source:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11729514
DOI: http://dx.doi.org/10.1155/ijta/4420410