Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
DOI: http://dx.doi.org/10.1109/TIP.2016.2520368
J Neural Eng
December 2024
West China Hospital of Sichuan University, No. 37 Guoxue Alley, Wuhou District, Chengdu, Sichuan 610041, China.
Objective: Brain-computer interfaces (BCIs) leverage artificial intelligence for EEG signal decoding, making them a promising new means of human-machine interaction. However, the performance of current EEG decoding methods remains insufficient for clinical applications because of inadequate EEG information extraction and the limited computational resources available in hospitals. This paper introduces a hybrid network that employs a Transformer with modified locally linear embedding and sliding-window convolution for EEG decoding.
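The sliding-window front end mentioned above can be illustrated with a short segmentation routine; this is a generic sketch of windowed EEG processing, not the hybrid network from the abstract, and the shapes are made up for illustration.

```python
import numpy as np

def sliding_windows(eeg, win, stride):
    """Segment a (channels, time) EEG recording into overlapping windows,
    the kind of input a sliding-window convolution front end consumes."""
    c, t = eeg.shape
    starts = range(0, t - win + 1, stride)
    return np.stack([eeg[:, s:s + win] for s in starts])  # (n_win, c, win)

x = np.zeros((32, 1000))                      # 32 channels, 1000 samples
w = sliding_windows(x, win=250, stride=125)   # 50% overlap
print(w.shape)  # (7, 32, 250)
```

Each window can then be convolved independently, and the resulting per-window features fed to a Transformer as a token sequence.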
In brain-computer interfaces (BCIs) based on motor imagery (MI), reducing calibration time is becoming an urgent issue for practical applications. Recently, transfer learning (TL) has demonstrated its effectiveness in reducing calibration time in MI-BCI. However, differences in data distribution across subjects greatly affect the performance of TL in MI-BCI.
Cogn Neurodyn
December 2024
School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072 China.
The integration and interaction of cross-modal senses in brain neural networks can facilitate high-level cognitive functionalities. In this work, we proposed a bioinspired multisensory integration neural network (MINN) that integrates visual and audio senses for recognizing multimodal information across different sensory modalities. This deep learning-based model incorporates a cascading framework of parallel convolutional neural networks (CNNs) for extracting intrinsic features from visual and audio inputs, and a recurrent neural network (RNN) for multimodal information integration and interaction.
Nat Comput Sci
December 2024
Computational Biology and Bioinformatics Program, Yale University, New Haven, CT, USA.
In single-cell sequencing analysis, several computational methods have been developed to map the cellular state space, but little has been done to map or create embeddings of the gene space. Here we formulate the gene embedding problem, design tasks with simulated single-cell data to evaluate representations, and establish ten relevant baselines. We then present a graph signal processing approach, called gene signal pattern analysis (GSPA), that learns rich gene representations from single-cell data using a dictionary of diffusion wavelets on the cell-cell graph.
IEEE Trans Inf Theory
December 2024
Department of CISE, University of Florida, Gainesville, FL 32611 USA.
Distributional approximation is a fundamental problem in machine learning with numerous applications across all fields of science and engineering and beyond. The key challenge in most approximation methods is the need to tackle the intractable normalization constant present in the candidate distributions used to model the data. This intractability is especially common for distributions of manifold-valued random variables, such as rotation matrices and other orthogonal matrices.
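The normalization-constant problem mentioned above can be made concrete with a standard importance-sampling estimator of log Z for an unnormalized density; this illustrates the generic difficulty, not the paper's method, and the 1-D Gaussian check is chosen so the true answer is known.

```python
import numpy as np

def estimate_log_z(log_unnorm, proposal_sampler, proposal_logpdf,
                   n=100_000, seed=0):
    """Importance-sampling estimate of log Z = log \u222b exp(log_unnorm(x)) dx."""
    rng = np.random.default_rng(seed)
    x = proposal_sampler(rng, n)
    logw = log_unnorm(x) - proposal_logpdf(x)
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))  # log-sum-exp for stability

# Sanity check: unnormalized exp(-x^2/2) has Z = sqrt(2*pi)
log_unnorm = lambda x: -0.5 * x ** 2
sampler = lambda rng, n: rng.normal(0.0, 2.0, size=n)   # N(0, 2) proposal
logpdf = lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2 * np.pi))
print(estimate_log_z(log_unnorm, sampler, logpdf))  # close to 0.5*log(2*pi)
```

For manifold-valued variables such as rotation matrices, the analogous integral runs over the manifold and a good proposal is much harder to construct, which is what makes the normalization constant intractable in practice.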