As a step toward fully automatic cardiac functional assessment of echocardiograms, automatic classification of their standard views is essential as a pre-processing stage. The similarity among three of the routinely acquired longitudinal scans: apical two-chamber (A2C), apical four-chamber (A4C), and apical long-axis (ALX), together with the noise commonly inherent to these scans, makes the classification a challenge. Here we introduce a multi-stage classification algorithm that employs spatio-temporal feature extraction (Cuboid Detector) and supervised dictionary learning (LC-KSVD) to enhance the automatic recognition and classification accuracy of echocardiograms. The algorithm incorporates both discrimination and labeling information to allow a discriminative and sparse representation of each view. The advantage of spatio-temporal feature extraction over purely spatial processing is then validated. A set of 309 clinical clips (103 per view) was labeled by two experts. A subset of 70 clips of each class was used as a training set and the rest as a test set. The recognition accuracies achieved were 97%, 91%, and 97% for A2C, A4C, and ALX respectively, with an average recognition rate of 95%. Thus, automatic classification of echocardiogram views seems promising, despite the inter-view similarity between classes and the intra-view variability among clips belonging to the same class.
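The classification stage of an LC-KSVD pipeline reduces to two operations: sparse-code a clip's feature vector over the learned dictionary D (typically with Orthogonal Matching Pursuit) and then apply the jointly learned linear classifier W to the resulting sparse code. The sketch below is a minimal illustration of that idea, not the paper's implementation; the dictionary, classifier, and sparsity level passed in are toy stand-ins.

```python
import numpy as np

def omp(dictionary, signal, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of the
    dictionary and least-squares fit their coefficients to the signal."""
    residual = signal.astype(float)
    support = []
    code = np.zeros(dictionary.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        atom = int(np.argmax(np.abs(dictionary.T @ residual)))
        if atom not in support:
            support.append(atom)
        # Re-fit coefficients over all selected atoms.
        coef, *_ = np.linalg.lstsq(dictionary[:, support], signal, rcond=None)
        residual = signal - dictionary[:, support] @ coef
    code[support] = coef
    return code

def classify(dictionary, classifier, signal, k=2):
    """Sparse-code the signal over the dictionary, then label it with the
    linear classifier applied to the sparse code."""
    code = omp(dictionary, signal, k)
    return int(np.argmax(classifier @ code))
```

In LC-KSVD the dictionary and classifier are learned jointly so that sparse codes of same-class signals activate the same atoms, which is why a plain argmax over the classifier's output suffices at test time.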
DOI: http://dx.doi.org/10.1016/j.media.2016.10.007
Comput Biol Med
January 2025
School of Computer Science, Chungbuk National University, Cheongju 28644, Republic of Korea.
The fusion index is a critical metric for quantitatively assessing the transformation of in vitro muscle cells into myotubes in the biological and medical fields. Traditional manual calculation of this index involves the labor-intensive counting of numerous muscle cell nuclei in images and determining whether each nucleus lies inside or outside the myotubes, leading to significant inter-observer variation. To address these challenges, this study proposes a three-stage process that integrates the strengths of pattern recognition and deep learning to automatically calculate the fusion index.
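A common definition of the fusion index is the fraction (often reported as a percentage) of nuclei located inside myotubes out of all counted nuclei; some protocols additionally require a myotube to contain at least two nuclei, which this minimal sketch ignores. The mask-and-centroid representation below is an assumption for illustration, not the paper's pipeline.

```python
import numpy as np

def fusion_index(myotube_mask, nuclei_centroids):
    """Fraction of nuclei whose centroid falls inside the myotube mask.

    myotube_mask     : 2-D boolean array (True = myotube pixel)
    nuclei_centroids : list of (row, col) pixel coordinates of nuclei
    """
    inside = sum(bool(myotube_mask[r, c]) for r, c in nuclei_centroids)
    return inside / len(nuclei_centroids)
```

In practice the mask would come from myotube segmentation and the centroids from nucleus detection, which is exactly where the proposed pattern-recognition and deep-learning stages would plug in.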
Sensors (Basel)
January 2025
Centre of Mechanical Technology and Automation (TEMA), Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal.
To automate the quality control of painted surfaces of heating devices, an automatic defect detection and classification system was developed. It combines deflectometry and bright-light-based illumination for image acquisition, deep learning models that classify non-defective (OK) and defective (NOK) surfaces by fusing dual-modal information at the decision level, and an online network for information dispatching and visualization. Three decision-making algorithms were tested: a new model built and trained from scratch, and transfer learning of pre-trained networks (ResNet-50 and Inception V3). The results revealed that the two illumination modes widened the range of defect types the system could identify, while keeping computational complexity low by performing multi-modal fusion at the decision level.
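Decision-level fusion of the two illumination modalities can be as simple as combining each branch's class probabilities before taking the final decision. The sketch below uses a weighted average; the two-class OK/NOK layout follows the abstract, but the specific fusion rule and the weight are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fuse_decisions(p_deflectometry, p_bright_light, weight=0.5):
    """Decision-level fusion: weighted average of each modality's class
    probabilities, followed by an argmax over the fused distribution."""
    fused = (weight * np.asarray(p_deflectometry, dtype=float)
             + (1.0 - weight) * np.asarray(p_bright_light, dtype=float))
    classes = ("OK", "NOK")  # non-defective vs defective
    return classes[int(np.argmax(fused))]
```

Fusing at the decision level, rather than concatenating raw images or features, is what keeps each branch's network small, which matches the low computational complexity the abstract reports.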
Sensors (Basel)
January 2025
Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816-8005, USA.
Recognizing targets in infrared images is an important problem for defense and security applications. A deployed network must not only recognize the known classes; it must also reject any new or unknown objects without mistaking them for one of the known classes. Our goal is to enhance the ability of existing (or pretrained) classifiers to detect and reject unknown classes.
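A standard baseline for rejecting unknowns (not necessarily the authors' method) is maximum-softmax-probability thresholding: accept the argmax class only when the classifier is confident enough, otherwise flag the input as unknown. The threshold value below is an illustrative assumption.

```python
import numpy as np

def predict_open_set(logits, threshold=0.7):
    """Return the predicted class index, or -1 ("unknown") when the
    maximum softmax probability falls below the threshold."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())   # shift for numerical stability
    p /= p.sum()
    return int(np.argmax(p)) if p.max() >= threshold else -1
```

A pretrained closed-set classifier can be wrapped this way without retraining, which is the spirit of enhancing existing classifiers that the abstract describes; stronger open-set methods replace the softmax score with calibrated or distance-based confidence.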
Sensors (Basel)
January 2025
Department of Information Engineering, University of Padova, 35122 Padova, Italy.
Sleep posture is a key factor in assessing sleep quality, especially for individuals with Obstructive Sleep Apnea (OSA), where the sleeping position directly affects breathing patterns: the side position alleviates symptoms, while the supine position exacerbates them. Accurate detection of sleep posture is therefore essential for assessing and improving sleep quality, and both wearable and non-wearable automatic detection systems have been developed for this purpose.
Sensors (Basel)
January 2025
Department of Environmental Remote Sensing and Geoinformatics, Trier University, Universitätsring 15, 54296 Trier, Germany.
Assessing vines' vigour is essential for vineyard management and for the automation of viticulture machines, including shaking adjustments of berry harvesters during grape harvest and leaf pruning applications. To address these problems, growth classes of precisely located grapevines, labeled as ground truth according to a standardized growth-class assessment, were predicted with specifically selected Machine Learning (ML) classifiers (Random Forest Classifier (RFC) and Support Vector Machines (SVM)) using multispectral UAV (Unmanned Aerial Vehicle) sensor data. The input features for ML model training comprise spectral, structural, and texture feature types generated from multispectral orthomosaics (spectral features), Digital Terrain and Surface Models (DTM/DSM; structural features), and Gray-Level Co-occurrence Matrix (GLCM) calculations (texture features).
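Spectral features derived from multispectral orthomosaics commonly include vegetation indices; NDVI, computed from the near-infrared and red bands, is a typical example of a per-pixel feature that could feed such classifiers. The band values and the small epsilon guard below are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Works element-wise on arrays (e.g. whole orthomosaic bands); eps avoids
    division by zero over non-reflective pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Per-vine statistics of such index maps (mean, percentiles) are a common way to turn pixel-level spectral data into the tabular features that RFC and SVM models expect.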