Background: To reduce radiation exposure and improve diagnosis in low-dose computed tomography (CT), several deep learning (DL)-based image denoising methods have been proposed over the past few years to suppress noise and artifacts. However, most of them aim to approximate the data distribution of the gold standard and neglect structural semantic preservation. Moreover, the numerical response in CT images shows substantial regional anatomical differences among tissues because of their differing X-ray absorbency.
Methods: In this paper, we introduce structural semantic information into low-dose CT imaging. First, a regional segmentation prior of the low-dose CT image guides the denoising process. Second, the structural segmentation results serve as evaluation metrics for the estimated normal-dose CT images. A semantic feature transform is then employed to combine semantic and image features in a semantic fusion module. In addition, a structural semantic loss function is introduced to measure the segmentation difference (a minimal sketch of these two components follows the abstract).
Results: Experiments are conducted on clinical abdominal data obtained from a hospital, and the semantic labels consist of subcutaneous fat, muscle, and visceral fat, which are associated with body composition evaluation. Compared with other DL-based methods, the proposed method achieves better quantitative metrics and better semantic evaluation results.
Conclusions: The quantitative experimental results demonstrate the promising performance of the proposed method in noise reduction and structural semantic preservation. However, the proposed method may suffer from limitations with respect to abnormalities, unknown noise, and data from different scanner manufacturers. In the future, the proposed method will be further explored, and wider applications in PET/CT and PET/MR will be sought.
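The Methods paragraph names two components that can be illustrated concretely: a semantic fusion module that injects a segmentation prior into the denoiser's feature maps, and a structural semantic loss that compares segmentations of the denoised and normal-dose images. The PyTorch sketch below is only a hedged interpretation under common design assumptions (SFT-style feature modulation, a frozen auxiliary segmenter, three tissue classes); the module names, shapes, and the `segmenter` network are illustrative and are not taken from the paper.

```python
# Illustrative sketch of a semantic fusion module and a segmentation-based
# loss; all names and shapes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticFusion(nn.Module):
    """Modulate denoiser feature maps with a segmentation prior via a
    feature-wise affine transform predicted from the semantic map."""
    def __init__(self, feat_ch: int, num_classes: int = 3):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(num_classes, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.to_gamma = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)

    def forward(self, feat, seg_prior):
        # seg_prior: (B, num_classes, H, W) soft masks, e.g. fat/muscle classes.
        h = self.shared(F.interpolate(seg_prior, size=feat.shape[-2:], mode="nearest"))
        return feat * (1 + self.to_gamma(h)) + self.to_beta(h)

def structural_semantic_loss(segmenter, denoised, ndct):
    """Penalize segmentation disagreement between the denoised image and the
    normal-dose reference, using a frozen segmentation network."""
    with torch.no_grad():
        target = segmenter(ndct).argmax(dim=1)            # pseudo-labels from NDCT
    return F.cross_entropy(segmenter(denoised), target)   # semantic consistency term
```

In training, a term like this would typically be weighted against a pixel-wise fidelity loss (e.g., MSE against the normal-dose image), with the segmenter kept frozen so that only the denoiser is optimized.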
DOI: http://dx.doi.org/10.1016/j.cmpb.2022.107199
Cogn Neurodyn
December 2025
Image Processing Laboratory, University of Valencia, Valencia, Spain.
In recent years, substantial strides have been made in the field of visual image reconstruction, particularly in its capacity to generate high-quality visual representations from human brain activity while considering semantic information. This advancement not only enables the recreation of visual content but also provides valuable insights into the intricate processes occurring within high-order functional brain regions, contributing to a deeper understanding of brain function. However, incorporating fused semantics when reconstructing visual images from brain activity relies on semantic-to-image guided reconstruction and may ignore the underlying neural computational mechanisms, so it does not represent true reconstruction from brain activity.
Am J Geriatr Psychiatry
December 2024
Department of Clinical and Experimental Sciences (DA, BB), University of Brescia, Brescia, Italy; Molecular Markers Laboratory (BB), IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, Brescia, Italy. Electronic address:
Objectives: The present study aims to assess the prevalence, associated clinical symptoms, longitudinal changes, and imaging correlates of Loss of Insight (LOI), which is still unexplored in syndromes associated with Frontotemporal Lobar Degeneration (FTLD).
Design: Retrospective longitudinal cohort study, from Oct 2009 to Feb 2023.
Setting: Tertiary Frontotemporal Dementia research clinic.
Neural Netw
January 2025
State Key Laboratory of Public Big Data, Guizhou University, 550025, China; Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Guizhou University, 550025, China; College of Computer Science and Technology, Guizhou University, 550025, China. Electronic address:
Relation extraction independently examines every entity pair in a sentence to identify predefined relationships between named entities. Because these entity pairs share the same contextual features of the sentence, the resulting semantic structure is complicated. To distinguish semantic expressions between relation instances, manually designed rules or elaborate deep architectures are usually applied to learn task-relevant representations.
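To make the pairwise setting described above concrete, here is a deliberately minimal Python sketch of relation extraction as independent classification of every ordered entity pair sharing one sentence context. The relation inventory and the `classify_pair` placeholder are hypothetical and stand in for a trained model; they are not the architecture proposed in the cited paper.

```python
# Toy illustration of pairwise relation extraction: every ordered entity
# pair is scored independently, even though all pairs share the same
# sentence context.
from itertools import permutations

RELATIONS = ("founded_by", "located_in", "no_relation")  # hypothetical inventory

def classify_pair(sentence: str, head: str, tail: str) -> str:
    """Placeholder for a trained classifier that would encode the sentence
    with entity markers and predict one label from RELATIONS."""
    if head == "Acme Corp" and tail == "Jane Doe" and "founded" in sentence:
        return "founded_by"
    return "no_relation"

def extract_relations(sentence: str, entities: list[str]) -> list[tuple[str, str, str]]:
    triples = []
    for head, tail in permutations(entities, 2):  # all ordered pairs
        rel = classify_pair(sentence, head, tail)
        if rel != "no_relation":
            triples.append((head, rel, tail))
    return triples

print(extract_relations("Acme Corp was founded by Jane Doe in Toronto.",
                        ["Acme Corp", "Jane Doe", "Toronto"]))
# [('Acme Corp', 'founded_by', 'Jane Doe')]
```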
Sensors (Basel)
January 2025
Faculty of Science and Environmental Studies, Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada.
In recent years, significant progress has been achieved in understanding and processing tabular data. However, existing approaches often rely on task-specific features and model architectures, posing challenges in accurately extracting table structures amidst diverse layouts, styles, and noise contamination. This study introduces a comprehensive deep learning methodology that is tailored for the precise identification and extraction of rows and columns from document images that contain tables.
Sensors (Basel)
December 2024
Master's Program in Information and Computer Science, Doshisha University, Kyoto 610-0394, Japan.
The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks.
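The abstract does not state how SegFormer was run; as one hedged illustration, pretrained SegFormer checkpoints can be used for semantic segmentation through the Hugging Face transformers library. The checkpoint name and image path below are placeholders (a generic ADE20K-finetuned model, not the bone-segmentation model trained in the study).

```python
# Minimal SegFormer inference sketch with Hugging Face transformers.
# The checkpoint is a generic ADE20K model used only for illustration.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

ckpt = "nvidia/segformer-b0-finetuned-ade-512-512"  # placeholder checkpoint
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt).eval()

image = Image.open("input_image.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample logits to the original resolution and take the per-pixel argmax.
mask = F.interpolate(logits, size=image.size[::-1], mode="bilinear",
                     align_corners=False).argmax(dim=1)[0]  # (H, W) label map
```

Fine-tuning on domain-specific masks, as a bone-segmentation study presumably requires, would follow the same interface with a task-specific label set.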