We propose a direct approach to learning sparse kernel classifiers for multi-instance (MI) classification that improves efficiency while maintaining predictive accuracy. The proposed method builds on a convex formulation for MI classification that uses the average score of the individual instances in a bag for bag-level prediction. In contrast, existing formulations use the maximum score of the individual instances in each bag, which leads to nonconvex optimization problems. Based on this convex MI framework, we formulate a sparse kernel learning algorithm by imposing additional constraints on the objective function that bound the number of expansions allowed in the prediction function. The resulting sparse learning problem for MI classification is convex with respect to the classifier weights, so an efficient optimization strategy can be employed to jointly learn the classifier and the expansion vectors. In addition, the proposed formulation explicitly controls the complexity of the prediction model while maintaining competitive predictive performance. Experimental results on benchmark data sets demonstrate that our approach builds very sparse kernel classifiers while achieving performance comparable to state-of-the-art MI classifiers.
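To make the bag-level rule concrete, here is a minimal sketch of prediction with an averaged instance score and a small expansion set. The RBF kernel, the expansion vectors Z, and the weights alpha are illustrative placeholders, not the paper's learned quantities.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF values k(x, z) = exp(-gamma * ||x - z||^2).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def bag_score(bag, Z, alpha, b=0.0, gamma=1.0):
    # Convex bag-level rule: average the per-instance scores
    # f(x) = sum_j alpha_j k(x, z_j) + b over the instances in the bag.
    # Sparsity means keeping len(alpha), the expansion count, small.
    instance_scores = rbf_kernel(bag, Z, gamma) @ alpha + b
    return float(instance_scores.mean())

# Toy usage: a bag of three instances scored against two expansion vectors.
bag = np.array([[0.0, 1.0], [1.0, 0.5], [0.2, 0.8]])
Z = np.array([[0.0, 1.0], [1.0, 0.0]])    # sparse expansion set
alpha = np.array([0.7, -0.4])             # hypothetical learned weights
label = 1 if bag_score(bag, Z, alpha) >= 0 else -1
```

Averaging over instances keeps the score linear in the classifier weights, which is what makes the formulation convex, unlike the max-based rule.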


Source: http://dx.doi.org/10.1109/TNNLS.2013.2254721


Similar Publications

Sparse Convolution FPGA Accelerator Based on Multi-Bank Hash Selection.

Micromachines (Basel)

December 2024

Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China.

Reconfigurable processor-based acceleration of deep convolutional neural network (DCNN) algorithms has become a widely adopted technique, with sparse neural network acceleration an especially active research area. However, many computing devices that claim high computational power still struggle to execute neural network algorithms with high efficiency, low latency, and minimal power consumption. Consequently, there remains significant room to improve the efficiency, latency, and power consumption of neural network accelerators across diverse computational scenarios.
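The abstract does not detail the hash-selection scheme, but the idea of multi-bank hash selection can be illustrated in software: nonzero weights are hashed to memory banks so that parallel processing elements read from distinct banks with few conflicts. Everything below (the hash function, NUM_BANKS) is an assumption for illustration, not the paper's hardware design.

```python
import numpy as np

NUM_BANKS = 4

def bank_of(row, col, num_banks=NUM_BANKS):
    # Simple hash spreading nonzero coordinates across banks so that
    # parallel compute units are unlikely to contend for the same bank.
    return (row * 31 + col) % num_banks

def pack_sparse(weights):
    # Group the nonzero entries of a 2-D weight matrix by bank.
    banks = [[] for _ in range(NUM_BANKS)]
    for r in range(weights.shape[0]):
        for c in range(weights.shape[1]):
            if weights[r, c] != 0:
                banks[bank_of(r, c)].append((r, c, weights[r, c]))
    return banks

# A 90%-sparse random matrix lands roughly evenly across the four banks.
W = np.random.rand(16, 16) * (np.random.rand(16, 16) > 0.9)
print([len(b) for b in pack_sparse(W)])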


Mode-informed complex-valued neural processes for matched field processing.

J Acoust Soc Am

January 2025

School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, People's Republic of China.

A complex-valued neural process method, combined with modal depth functions (MDFs) of the ocean waveguide, is proposed to reconstruct the acoustic field. Neural networks are used to describe complex Gaussian processes, modeling the distribution of the acoustic field at different depths. The network parameters are optimized through a meta-learning strategy, which prevents overfitting under small-sample conditions (sample size equal to the number of array elements) and mitigates the slow reconstruction speed of Gaussian processes (GPs). The method denoises and interpolates sparsely distributed acoustic field data, generating dense field data for virtual receiver arrays.
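The modal structure the method relies on can be sketched directly: the complex pressure at depth z is a weighted sum of modal depth functions, p(z) = sum_m a_m psi_m(z). Below, sinusoidal MDFs of an ideal waveguide and fixed complex amplitudes stand in for what the paper infers with a complex-valued neural process; all values are placeholders.

```python
import numpy as np

def reconstruct_field(depths, mode_fns, amplitudes):
    # p(z) = sum_m a_m * psi_m(z): complex pressure as a modal sum.
    Psi = np.stack([psi(depths) for psi in mode_fns], axis=1)  # (n_z, M)
    return Psi @ amplitudes                                    # complex, (n_z,)

# Sinusoidal MDFs of an ideal waveguide of depth D stand in for the
# true modes; the amplitudes are fixed placeholders for the quantities
# the paper infers with its complex-valued neural process.
D = 100.0
mode_fns = [lambda z, m=m: np.sin((m + 0.5) * np.pi * z / D) for m in range(3)]
amps = np.array([1.0 + 0.2j, 0.5 - 0.1j, 0.1 + 0.05j])
field = reconstruct_field(np.linspace(0.0, D, 50), mode_fns, amps)
```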


Temporal logic inference for interpretable fault diagnosis of bearings via sparse and structured neural attention.

ISA Trans

January 2025

State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China. Electronic address:

This paper addresses the critical challenge of interpretability in machine learning methods for machine fault diagnosis by introducing a novel ad hoc interpretable neural network structure called Sparse Temporal Logic Network (STLN). STLN conceptualizes network neurons as logical propositions and constructs formal connections between them using specified logical operators, which can be articulated and understood as a formal language called Weighted Signal Temporal Logic. The network includes a basic word network using wavelet kernels to extract intelligible features, a transformer encoder with sparse and structured neural attention to locate informative signal segments relevant to decision-making, and a logic network to synthesize a coherent language for fault explanation.
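The abstract does not give the network's exact operators, but the core idea of differentiable temporal logic can be sketched with smooth robustness semantics: soft max/min approximations of Signal Temporal Logic's "eventually" and "always" keep the formula evaluation trainable by gradient descent. The operators and beta below are a generic sketch, not STLN's specific construction.

```python
import numpy as np

def soft_max(x, beta=10.0):
    # Smooth, differentiable approximation of max over time steps.
    return np.log(np.exp(beta * np.asarray(x)).sum()) / beta

def eventually(rho):
    # F phi: satisfied if phi holds at some time step.
    return soft_max(rho)

def always(rho):
    # G phi: satisfied if phi holds at every time step.
    return -soft_max(-np.asarray(rho))

# Predicate robustness r(t) = s(t) - c for the atom "s > c";
# a positive value means the formula is satisfied, and the smooth
# operators let formula weights be learned by backpropagation.
signal = np.array([0.1, 0.4, 0.9, 0.3])
rho = signal - 0.5
print(eventually(rho), always(rho))
```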


Sparse kernel k-means clustering.

J Appl Stat

June 2024

Graduate School, Department of Urban Big Data Convergence, University of Seoul, Seoul, South Korea.

Clustering is an essential technique that groups similar data points to uncover the underlying structure and features of the data. Although traditional clustering methods such as k-means are widely utilized, they have limitations in identifying nonlinear clusters. Thus, alternative techniques, such as kernel k-means and spectral clustering, have been developed to address this issue.
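For reference, the assignment step of plain kernel k-means, which the sparse variant builds on, can be written entirely in terms of the kernel matrix; this is a sketch of the textbook algorithm, not the paper's sparse method.

```python
import numpy as np

def kernel_kmeans_assign(K, labels, n_clusters):
    # One assignment step: squared feature-space distance to each
    # centroid follows from the kernel matrix alone, since
    # ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl,
    # with j, l ranging over cluster c (assumes no cluster is empty).
    n = K.shape[0]
    dists = np.empty((n, n_clusters))
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        cross = K[:, idx].mean(axis=1)        # mean_j K_ij
        within = K[np.ix_(idx, idx)].mean()   # mean_jl K_jl
        dists[:, c] = np.diag(K) - 2.0 * cross + within
    return dists.argmin(axis=1)
```

Iterating this assignment until the labels stabilize gives the full algorithm; the paper's sparse variant adds further structure on top of this basic step.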


RNA N6-methyladenosine (m6A) is a critical epigenetic modification closely related to rice growth, development, and stress response. Accurate m6A identification, directly relevant to precision rice breeding and improvement, is fundamental to revealing phenotypic regulation and molecular mechanisms. Because rice m6A sequences vary in length, maximum-length padding and label encoding are typically applied to obtain fixed-length padded sequences as model input for prediction.
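The preprocessing just described is straightforward to sketch; the integer vocabulary below is an assumed mapping (0 reserved for padding), not taken from the paper.

```python
# Assumed vocabulary (0 reserved for padding); the paper's exact
# encoding is not specified in the excerpt above.
VOCAB = {"A": 1, "C": 2, "G": 3, "U": 4}

def encode_and_pad(seqs):
    # Label-encode each RNA sequence and right-pad to the batch maximum.
    max_len = max(len(s) for s in seqs)
    return [[VOCAB[ch] for ch in s] + [0] * (max_len - len(s)) for s in seqs]

batch = encode_and_pad(["AUGGC", "GCA", "UUACGU"])
# Every row now has length 6; shorter sequences end in zeros.
```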

