In the last few decades, major progress has been made in medicine: new treatments and advanced health technologies have brought considerable improvements in life expectancy and, more broadly, in quality of life. As a consequence, the number of elderly people is expected to increase in the coming years. This trend, together with the need to support the independence of frail people, has driven the development of unobtrusive solutions that monitor daily activities and provide feedback in risky situations and falls. Monitoring devices based on radar sensors are one approach to postural analysis that preserves the person's privacy, making them especially useful in domestic environments. This work presents an innovative solution that combines millimeter-wave radar technology with artificial intelligence (AI) to detect different types of postures: a series of algorithms and neural network methodologies are evaluated on experimental acquisitions with healthy subjects. All methods achieve very good results on the main performance metrics; the long short-term memory (LSTM) and gated recurrent unit (GRU) networks show the most consistent results while maintaining low computational complexity, making them strong candidates for implementation in a dedicated embedded system designed to monitor postures.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11478366
DOI: http://dx.doi.org/10.3390/s24196208
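The abstract above does not detail the network inputs or framework; as a rough sketch of the recurrent approach it describes, the PyTorch snippet below assumes each radar frame has already been reduced to a fixed-length feature vector and classifies a short sequence of frames into a posture class. All module names, dimensions, and the feature representation are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RadarPostureGRU(nn.Module):
    """GRU classifier over a sequence of per-frame radar features.

    Assumptions (not from the paper): each mmWave radar frame has been
    reduced to a feat_dim feature vector (e.g., point-cloud statistics),
    and a window of consecutive frames maps to one posture class.
    """

    def __init__(self, feat_dim=64, hidden_dim=32, num_postures=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_postures)

    def forward(self, x):                  # x: (batch, seq_len, feat_dim)
        _, h = self.gru(x)                 # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))     # logits: (batch, num_postures)

# Toy usage: 8 sequences of 30 frames, 64 features each.
model = RadarPostureGRU()
logits = model(torch.randn(8, 30, 64))
print(logits.argmax(dim=1))  # predicted posture index per sequence
```

A small recurrent model like this keeps the parameter count low, which is consistent with the abstract's point about reduced computational complexity for embedded deployment.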
Geroscience
January 2025
Department of Neurology, Ewha Womans University Mokdong Hospital, Ewha Womans University College of Medicine, Seoul, Republic of Korea.
Background: Superagers, older adults with exceptional cognitive abilities, show preserved brain structure compared to typical older adults. We investigated whether superagers have biologically younger brains based on their structural integrity.
Methods: A cohort of 153 older adults (aged 61-93) was recruited, with 63 classified as superagers based on superior episodic memory and 90 as typical older adults, of whom 64 were followed up after two years.
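A common way to operationalize "biologically younger brains" is a brain-age model: regress chronological age on structural features in a reference sample, then compute the brain-age gap (predicted minus chronological age) in the cohort of interest. The study's actual pipeline is not given here, so the sketch below is a generic illustration on synthetic data; all features and numbers are made up.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data: rows are subjects, columns are structural MRI
# features (e.g., regional cortical thickness). Nothing here comes
# from the study itself.
rng = np.random.default_rng(0)
n_train, n_feat = 500, 100
X_train = rng.normal(size=(n_train, n_feat))
age_train = rng.uniform(61, 93, size=n_train)

model = Ridge(alpha=1.0).fit(X_train, age_train)

# Brain-age gap = predicted minus chronological age; a negative gap
# would suggest a "biologically younger" brain.
X_cohort = rng.normal(size=(10, n_feat))
age_cohort = rng.uniform(61, 93, size=10)
brain_age_gap = model.predict(X_cohort) - age_cohort
print(brain_age_gap.round(1))
```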
J Imaging Inform Med
January 2025
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.
Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. ViTs may struggle to capture the detailed local spatial information critical for tasks like anomaly detection in medical imaging, while shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViT and CNN components to leverage their complementary strengths for enhanced performance in medical vision tasks, such as segmentation, classification, reconstruction, and prediction.
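As a concrete (hypothetical) illustration of such a hybrid, the sketch below feeds feature-map patches from a small CNN stem into a transformer encoder, so spatial convolutions handle local detail and self-attention models global context. The architecture, sizes, and names are assumptions for illustration, not one of the study's evaluated models.

```python
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    """Illustrative CNN/ViT hybrid: a convolutional stem supplies
    local-feature tokens to a transformer encoder for global context."""

    def __init__(self, in_ch=1, dim=128, num_classes=2):
        super().__init__()
        # CNN stem: local features, downsampling 64x64 input to 8x8.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, C, 64, 64)
        f = self.stem(x)                       # (B, dim, 8, 8)
        tokens = f.flatten(2).transpose(1, 2)  # (B, 64, dim)
        tokens = self.encoder(tokens)          # self-attention over tokens
        return self.head(tokens.mean(dim=1))   # pooled logits

model = HybridCNNViT()
print(model(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 2])
```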
J Imaging Inform Med
January 2025
Department of Orthopedic Surgery, Arrowhead Regional Medical Center, Colton, CA, USA.
Rib pathology is uniquely difficult and time-consuming for radiologists to diagnose. AI can reduce radiologist workload and serve as a tool to improve diagnostic accuracy. To date, no reviews have synthesized the data on AI identification of rib fractures, its diagnostic performance on X-ray and CT scans, and how it compares to physicians.
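For context on how such AI-versus-physician comparisons are typically scored, the snippet below computes sensitivity, specificity, and accuracy from 2x2 confusion-matrix counts. The counts are invented purely for illustration and are not results from the review.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion
    matrix of fracture / no-fracture calls."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc

# Hypothetical counts (tp, fp, fn, tn) for two readers.
for reader, counts in {"AI model": (90, 12, 10, 88),
                       "radiologist": (85, 6, 15, 94)}.items():
    sens, spec, acc = diagnostic_performance(*counts)
    print(f"{reader}: sensitivity={sens:.2f} "
          f"specificity={spec:.2f} accuracy={acc:.2f}")
```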
J Imaging Inform Med
January 2025
Leiden University Medical Center (LUMC), Leiden, the Netherlands.
Rising computed tomography (CT) workloads require more efficient image interpretation methods. Digitally reconstructed radiographs (DRRs), generated from CT data, may enhance workflow efficiency by enabling faster radiological assessments. Various techniques exist for generating DRRs.
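One simple and widely used technique is a parallel-ray projection: convert Hounsfield units to linear attenuation coefficients, integrate along the ray direction, and map the line integrals to pixel intensities via Beer-Lambert. The sketch below illustrates that idea on a toy volume; the techniques the article compares may use different ray geometries and intensity mappings.

```python
import numpy as np

def simple_drr(ct_hu, axis=1, mu_water=0.2):
    """Parallel-projection DRR: HU -> linear attenuation, sum along
    one volume axis, then Beer-Lambert to get an intensity image.
    A minimal sketch of one common technique, not a clinical tool."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)   # HU -> attenuation coeff.
    mu = np.clip(mu, 0.0, None)              # air and below attenuate ~0
    line_integrals = mu.sum(axis=axis)       # integrate along ray paths
    return 1.0 - np.exp(-line_integrals)     # brighter where denser

# Toy CT volume (z, y, x) in HU: air background with a "bone" block.
vol = np.full((64, 64, 64), -1000.0)
vol[20:40, 25:35, 20:45] = 700.0
image = simple_drr(vol, axis=1)              # project along y
print(image.shape, image.min().round(3), image.max().round(3))
```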
J Imaging Inform Med
January 2025
College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.
This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
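As a generic illustration of the transfer-learning setup (the study uses ophthalmology-specific pretrained backbones, which are not reproduced here), the sketch below loads an ImageNet-pretrained ViT from the timm library as a stand-in, swaps in a 3-class head, freezes the backbone, and runs one training step on dummy inputs.

```python
import timm
import torch

# Stand-in backbone: generic ImageNet-pretrained ViT, with a new head
# for the 3-class task (diabetic retinopathy / glaucoma / cataract).
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=3)

# Freeze the backbone; train only the new classification head.
for name, param in model.named_parameters():
    if "head" not in name:
        param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on dummy fundus-sized inputs.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 3, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```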