The progression of deep learning and the widespread adoption of sensors have enabled automatic multi-view fusion (MVF) of cardiovascular system (CVS) signals. However, prevailing MVF architectures often amalgamate CVS signals from the same temporal step but different views into a unified representation, disregarding the asynchronous nature of cardiovascular events and the inherent heterogeneity across views, leading to catastrophic view confusion. Efficient training strategies specifically tailored for MVF models to attain comprehensive representations need simultaneous consideration. Crucially, real-world data frequently arrives with incomplete views, an aspect researchers have rarely addressed. Thus, the View-Centric Transformer (VCT) and Multitask Masked Autoencoder (M2AE) are specifically designed to emphasize the centrality of each view and harness unlabeled data to achieve superior fused representations. Additionally, we systematically define the missing-view problem for the first time and introduce prompt techniques to aid pretrained MVF models in flexibly adapting to various missing-view scenarios. Rigorous experiments involving atrial fibrillation detection, blood pressure estimation, and sleep staging (typical health monitoring tasks) demonstrate the remarkable advantage of our method in MVF compared to prevailing methodologies. Notably, the prompt technique requires fine-tuning less than 3% of the entire model's parameters, substantially fortifying the model's resilience to missing views while circumventing the need for complete retraining. The results demonstrate the effectiveness of our approaches, highlighting their potential for practical applications in cardiovascular health monitoring. Codes and models are released at URL.
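The abstract gives no implementation details, but the prompt idea can be illustrated with a minimal sketch: freeze the pretrained fusion weights and substitute a small learnable prompt vector for each absent view before fusion, keeping the trainable parameters well under the stated 3% budget. All names and sizes here (`D`, `V`, `W_fusion`, `prompts`) are hypothetical, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 64, 3  # embedding dim and number of views (illustrative sizes)

# Frozen pretrained fusion weights (stand-in for the full MVF model).
W_fusion = rng.standard_normal((V * D, D))

# Learnable missing-view prompts: one D-dim vector per view.
prompts = np.zeros((V, D))

def fuse(view_embeddings, present):
    """Replace absent views with their learned prompt, then apply frozen fusion."""
    tokens = [view_embeddings[v] if present[v] else prompts[v] for v in range(V)]
    return np.concatenate(tokens) @ W_fusion

views = rng.standard_normal((V, D))
fused = fuse(views, present=[True, False, True])  # view 1 is missing

# Only the prompts would be trained; check they stay under a 3% budget.
ratio = prompts.size / (prompts.size + W_fusion.size)
```

In this toy setup the prompts account for roughly 1.5% of all parameters, consistent with the sub-3% fine-tuning budget the abstract reports.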
DOI: http://dx.doi.org/10.1016/j.neunet.2024.106760
Health Inf Sci Syst
December 2025
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, No. 727 Jingming South Road, Kunming 650504, Yunnan, China.
The classification of sleep stages is essential for diagnosing mental health conditions and assessing sleep quality. Although deep learning-based methods are effective in this field, they often fail to capture sufficient features or to adequately synthesize information from multiple sources. To improve the accuracy of sleep stage classification, our methodology extracts a diverse array of features from polysomnography signals, along with their transformed graph and time-frequency representations.
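One of the feature families mentioned, time-frequency representations of polysomnography signals, can be sketched with a plain short-time Fourier transform. The window and hop sizes and the `log_spectrogram` helper are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def log_spectrogram(sig, win=64, hop=32):
    """Frame a 1D signal, apply a Hann window, and take the
    log-magnitude rFFT of each frame (a time-frequency image)."""
    frames = np.stack([sig[i:i + win] * np.hanning(win)
                       for i in range(0, len(sig) - win + 1, hop)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

# Toy 1000-sample signal standing in for one EEG epoch.
sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
tf = log_spectrogram(sig)  # rows: time frames, columns: frequency bins
```

The resulting 2D array can be fed to a convolutional branch alongside raw-signal and graph-based features.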
Sensors (Basel)
December 2024
College of Intelligent Manufacturing and Industrial Modernization, Xinjiang University, Urumqi 830017, China.
This paper addresses the challenges of low accuracy and long transfer learning time in small-sample bearing fault diagnosis, which are often caused by limited samples, high noise levels, and poor feature extraction. We propose a method that combines an improved capsule network with a Siamese neural network. Multi-view data partitioning is used to enrich data diversity, and Markov transformation converts one-dimensional vibration signals into two-dimensional images, enhancing the visualization of signal features.
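The Markov transformation mentioned above is commonly realized as a Markov Transition Field (MTF), which maps a 1D vibration signal to a 2D image of state-transition probabilities. A minimal sketch, assuming quantile binning into `n_bins` amplitude states (the bin count is an illustrative choice):

```python
import numpy as np

def markov_transition_field(signal, n_bins=8):
    """Convert a 1D signal into a 2D Markov Transition Field image."""
    # Quantize amplitudes into n_bins states via quantile binning.
    edges = np.quantile(signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(signal, edges)
    # Estimate the state-transition matrix from consecutive samples.
    W = np.zeros((n_bins, n_bins))
    for s, t in zip(states[:-1], states[1:]):
        W[s, t] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize
    # MTF pixel (i, j) holds the probability of state_i -> state_j.
    return W[states[:, None], states[None, :]]

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # stand-in vibration signal
img = markov_transition_field(x)
```

Each pixel is a transition probability in [0, 1], so the image can be treated like a grayscale input to the capsule network.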
Sensors (Basel)
December 2024
Department of Electrical Engineering, Center for Innovative Research on Aging Society (CIRAS), Advanced Institute of Manufacturing with High-Tech Innovations (AIM-HI), National Chung Cheng University, Chia-Yi 621, Taiwan.
In computer vision, accurately estimating a 3D human skeleton from a single RGB image remains a challenging task. Inspired by the advantages of multi-view approaches, we propose a method of predicting enhanced 2D skeletons (specifically, predicting the joints' relative depths) from multiple virtual viewpoints based on a single real-view image. By fusing these virtual-viewpoint skeletons, we can then estimate the final 3D human skeleton more accurately.
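As a toy illustration of the fusion step, suppose each virtual viewpoint yields a full 3D skeleton estimate in its own camera frame (a simplification; the paper predicts enhanced 2D skeletons with relative joint depths). Rotating each estimate back to the real view and averaging recovers the skeleton. The 17-joint layout and the yaw angles are hypothetical:

```python
import numpy as np

def rot_y(theta):
    """Rotation matrix about the vertical (y) axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def fuse_virtual_views(view_preds, angles):
    """Rotate each virtual-view skeleton back to the real camera frame
    and average the aligned estimates."""
    back = [pred @ rot_y(th) for pred, th in zip(view_preds, angles)]
    return np.mean(back, axis=0)

rng = np.random.default_rng(1)
skeleton = rng.standard_normal((17, 3))     # 17 joints, (x, y, z)
angles = [0.0, np.pi / 4, -np.pi / 4]       # hypothetical virtual yaw angles
# Simulated noiseless predictions: the skeleton seen from each viewpoint.
view_preds = [skeleton @ rot_y(th).T for th in angles]
fused = fuse_virtual_views(view_preds, angles)
```

With noiseless inputs the fusion reproduces the original skeleton exactly; with real per-view noise, averaging reduces its variance.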
Front Neurosci
December 2024
Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
Objective: High Angular Resolution Diffusion Imaging (HARDI) models have emerged as a valuable tool for investigating microstructure with a higher degree of detail than standard diffusion Magnetic Resonance Imaging (dMRI). In this study, we explored the potential of multiple advanced microstructural diffusion models for investigating preterm birth in order to identify non-invasive markers of altered white matter development.
Approach: Rather than focusing on a single MRI modality, we studied a combination of HARDI techniques in 46 preterm babies scanned on a 3T scanner at term-equivalent age and in 23 control neonates born at term.
Neural Netw
December 2024
College of Automation, Chongqing University of Posts and Telecommunications, Nan'an District, 400065, Chongqing, China. Electronic address:
Multi-view clustering handles high-dimensional data better by combining information from multiple views, which is important in big data mining. However, most existing models simply perform feature fusion after extracting features from individual views and fail to capture the holistic attribute information of multi-view data because they ignore the significant disparities among views, which seriously degrades clustering performance. In this paper, inspired by the attention mechanism, an approach called Multi-View Fusion Clustering with Attentive Contrastive Learning (MFC-ACL) is proposed to tackle these issues.
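The attention-based fusion described can be sketched as scoring each view's feature vector against a shared attention vector and combining the views by softmax weights. The name `w_att`, the view count, and the feature size are illustrative assumptions, and the contrastive-learning component is omitted:

```python
import numpy as np

def attentive_fusion(view_feats, w_att):
    """Score each view with a shared attention vector, normalize the scores
    with softmax, and return the weighted sum of view features."""
    scores = np.array([f @ w_att for f in view_feats])
    alpha = np.exp(scores - scores.max())   # stable softmax
    alpha /= alpha.sum()
    fused = sum(a * f for a, f in zip(alpha, view_feats))
    return fused, alpha

rng = np.random.default_rng(0)
views = [rng.standard_normal(32) for _ in range(4)]  # 4 views, 32-dim features
fused, alpha = attentive_fusion(views, rng.standard_normal(32))
```

The learned weights `alpha` let the model emphasize informative views instead of averaging them uniformly, which is the disparity-aware behavior the abstract argues plain fusion lacks.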