To quantify user-item preferences, a recommender system (RS) commonly adopts a high-dimensional and sparse (HiDS) matrix. Such a matrix can be represented by a non-negative latent factor analysis model relying on a single latent factor (LF)-dependent, non-negative, and multiplicative update algorithm. However, existing models' representational abilities are limited by their specialized learning objectives. To address this issue, this study proposes an α-β-divergence-generalized model that enjoys fast convergence. Its ideas are three-fold: 1) generalizing the learning objective with α-β-divergence to achieve a highly accurate representation of HiDS data; 2) incorporating a generalized momentum method into parameter learning for fast convergence; and 3) implementing self-adaptation of controllable hyperparameters for excellent practicability. Empirical studies on six HiDS matrices from real RSs demonstrate that, compared with state-of-the-art LF models, the proposed one achieves significant gains in accuracy and efficiency when estimating the huge volume of missing data in an HiDS matrix.
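The abstract does not give the model's update rules, but the core idea — fitting non-negative latent factors to the observed entries of a sparse matrix under an α-β-divergence objective, with momentum for faster convergence — can be sketched as follows. This is an illustrative toy, not the paper's single-LF-dependent multiplicative algorithm: the divergence formula is the standard α-β (AB) divergence for α, β, α+β ≠ 0, and the optimizer here is plain momentum SGD with non-negativity enforced by clipping; the function names and hyperparameter values are my own assumptions.

```python
import numpy as np

def ab_divergence(p, q, alpha=1.5, beta=0.5):
    # Standard alpha-beta divergence between positive arrays p and q,
    # valid when alpha, beta, and alpha+beta are all nonzero.
    s = alpha + beta
    return -(1.0 / (alpha * beta)) * np.sum(
        p**alpha * q**beta - (alpha / s) * p**s - (beta / s) * q**s
    )

def fit_nlf(R_obs, rank=2, alpha=1.5, beta=0.5, lr=0.01,
            momentum=0.9, epochs=200, eps=1e-8, seed=0):
    """Toy non-negative latent-factor fit on observed entries
    (list of (row, col, value) triples), minimizing the alpha-beta
    divergence with momentum SGD. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    rows, cols, _ = zip(*R_obs)
    m, n = max(rows) + 1, max(cols) + 1
    U = rng.uniform(0.1, 1.0, (m, rank))
    V = rng.uniform(0.1, 1.0, (n, rank))
    vU, vV = np.zeros_like(U), np.zeros_like(V)
    for _ in range(epochs):
        gU, gV = np.zeros_like(U), np.zeros_like(V)
        for i, j, p in R_obs:
            q = max(U[i] @ V[j], eps)
            # dD_AB/dq = (1/alpha) * q^(beta-1) * (q^alpha - p^alpha)
            g = (1.0 / alpha) * q**(beta - 1) * (q**alpha - p**alpha)
            gU[i] += g * V[j]
            gV[j] += g * U[i]
        # generalized momentum step, then clip to keep factors non-negative
        vU = momentum * vU - lr * gU
        vV = momentum * vV - lr * gV
        U = np.maximum(U + vU, eps)
        V = np.maximum(V + vV, eps)
    return U, V
```

Note that the α-β divergence recovers several familiar objectives in limiting cases (e.g., Euclidean distance, KL divergence, Itakura-Saito divergence), which is what makes it a natural generalization of the specialized learning objectives mentioned above.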

Source: http://dx.doi.org/10.1109/TCYB.2020.3026425

