Purpose: The purpose of this study was to determine whether the published left-wrist cut points for the triaxial Gravity Estimator of Normal Everyday Activity (GENEA) accelerometer are accurate for predicting intensity categories during structured activity bouts.
Methods: A convenience sample of 130 adults wore a GENEA accelerometer on their left wrist while performing 14 different lifestyle activities. During each activity, oxygen consumption was continuously measured using the Oxycon mobile. Statistical analysis used Spearman's rank correlations to determine the relationship between measured and estimated intensity classifications. Cross tabulations were constructed to show the under- or overestimation of misclassified intensities. One-way χ2 tests were used to determine whether the intensity classification accuracy for each activity differed from 80%.
Results: For all activities, the GENEA accelerometer-based physical activity monitor explained 41.1% of the variance in energy expenditure. The intensity classification accuracy was 69.8% for sedentary activities, 44.9% for light activities, 46.2% for moderate activities, and 77.7% for vigorous activities. The GENEA correctly classified intensity for 52.9% of observations when all activities were examined; this increased to 61.5% with stationary cycling removed.
Conclusions: A wrist-worn triaxial accelerometer has modest intensity-classification accuracy across a broad range of activities when using the cut points of Esliger et al. Although the sensitivity and specificity are lower than those reported by Esliger et al., they are generally in the same range as those reported for waist-worn, uniaxial accelerometer cut points.
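The two analyses named in the Methods (Spearman's rank correlation between measured and estimated intensity categories, and a one-way χ² test of whether classification accuracy differs from 80%) can be sketched as follows. The category codes and counts below are illustrative, not the study's raw data; the 53/100 figure merely echoes the reported 52.9% overall accuracy.

```python
import numpy as np
from scipy.stats import spearmanr, chisquare

# Hypothetical measured vs. estimated intensity categories
# (0 = sedentary, 1 = light, 2 = moderate, 3 = vigorous).
measured  = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2]
estimated = [0, 1, 1, 0, 2, 3, 3, 3, 1, 1]
rho, _ = spearmanr(measured, estimated)  # rank correlation of the two codings

# One-way chi-square: does observed accuracy differ from the 80% criterion?
# Suppose 53 of 100 activity bouts were classified correctly.
n, correct = 100, 53
stat, p = chisquare(f_obs=[correct, n - correct],
                    f_exp=[0.8 * n, 0.2 * n])
```

With these illustrative counts the χ² statistic is large and p is far below 0.05, so an accuracy of 53% would be judged significantly different from the 80% criterion.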
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3778030
DOI: http://dx.doi.org/10.1249/MSS.0b013e3182965249
J Imaging Inform Med
January 2025
Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.
The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) from in vivo confocal microscope (IVCM) images and to evaluate the model's performance and its auxiliary value for clinical diagnosis and treatment. We extracted 6643 IVCM images from three hospitals' IVCM databases as the training set and 1661 IVCM images from two other hospitals' IVCM databases as the test set to examine the model's performance. The DCNN model was constructed using DenseNet-169.
J Imaging Inform Med
January 2025
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.
Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. ViTs may struggle to capture detailed local spatial information, which is critical for tasks like anomaly detection in medical imaging, whereas shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViT and CNN components to leverage their complementary strengths for enhanced performance in medical vision tasks such as segmentation, classification, reconstruction, and prediction.
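The complementary operations the abstract contrasts can be illustrated in a minimal NumPy sketch: a spatial convolution extracting local features from an image, and single-head self-attention over flattened patches mixing information globally. This is a toy illustration under assumed shapes (an 8×8 image, 2×2 patches), not the architecture of the study.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention: every token attends to every other token."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)          # softmax over tokens
    return a @ v, a

img = rng.standard_normal((8, 8))              # toy "image"

# CNN side: a 3x3 Laplacian-style filter responds only to local structure.
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
local = convolve2d(img, kernel, mode="same")

# ViT side: flatten 2x2 patches into 16 tokens of dimension 4, then attend.
tokens = img.reshape(4, 2, 4, 2).transpose(0, 2, 1, 3).reshape(16, 4)
d = tokens.shape[1]
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
globl, attn = self_attention(tokens, wq, wk, wv)
```

Each output row of `attn` is a probability distribution over all 16 patches, so every patch's representation can draw on the whole image, while `local` depends only on each pixel's 3×3 neighborhood; hybrid architectures combine both kinds of feature.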
J Imaging Inform Med
January 2025
College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.
This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
J Imaging Inform Med
January 2025
Department of Software Convergence, Seoul Women's University, Hwarango 621, Nowongu, Seoul, 01797, Republic of Korea.
In this paper, we propose a method to address class-imbalance learning in the classification of focal liver lesions (FLLs) from abdominal CT images. Class imbalance is a significant challenge in medical image analysis, making it difficult for machine learning models to classify underrepresented lesion types accurately. To overcome this, we propose a class-wise combination of mixture-based data augmentation (CCDA) method that uses two mixture-based data augmentation techniques, MixUp and AugMix.
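One of the two mixture-based techniques named above, MixUp, can be sketched in a few lines: a pair of samples and their one-hot labels are blended with a weight drawn from a Beta distribution. The α value and the per-class selection of MixUp versus AugMix are assumptions here; the paper's CCDA rule for combining the two techniques class-wise is not reproduced.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two samples and their one-hot labels with a Beta-sampled weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2            # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2            # soft label with the same weight
    return x, y, lam

# Toy example: two 4x4 "CT patches" from different lesion classes.
rng = np.random.default_rng(0)
a, b = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
ya = np.array([1.0, 0.0, 0.0])                 # one-hot class labels
yb = np.array([0.0, 1.0, 0.0])
x, y, lam = mixup(a, ya, b, yb, rng=rng)
```

Because both inputs' one-hot labels sum to one, the mixed soft label also sums to one, and a small α keeps most mixes close to one of the two originals.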
Nat Mater
January 2025
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China.
Machine learning algorithms have proven to be effective for essential quantum computation tasks such as quantum error correction and quantum control. Efficient hardware implementation of these algorithms at cryogenic temperatures is essential. Here we utilize magnetic topological insulators as memristors (termed magnetic topological memristors) and introduce a cryogenic in-memory computing scheme based on the coexistence of a chiral edge state and a topological surface state.