Recommender systems are highly vulnerable to shilling attacks, whether mounted by individuals or by coordinated groups. Attackers who inject biased ratings to influence recommendations have been shown to degrade collaborative filtering (CF) algorithms. Previous research focuses only on the differences between genuine profiles and attack profiles, ignoring the group characteristics of attack profiles; a further limitation is that most existing detection methods are model-specific. In this paper, we study the use of statistical metrics to detect the rating patterns of attackers and the group characteristics of attack profiles. Two metrics, Rating Deviation from Mean Agreement (RDMA) and Degree of Similarity with Top Neighbors (DegSim), are used to analyze the differences in rating patterns between malicious profiles and genuine profiles across attack models. Building on this analysis, we propose and evaluate a detection structure called RD-TIA for detecting shilling attacks in recommender systems using a statistical approach. To detect more complicated attack models, we also propose a novel metric called DegSim' based on DegSim. The experimental results show that our detection model based on target item analysis is an effective approach for detecting shilling attacks.
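The two metrics named in the abstract have standard definitions in the shilling-detection literature: RDMA averages each user's per-item deviation from the item mean, weighted inversely by the item's rating count, and DegSim averages a user's Pearson similarity with their top-k nearest neighbors. A minimal NumPy sketch of these definitions (not the authors' RD-TIA implementation; the mean-imputation step and the choice of k here are illustrative assumptions) might look like:

```python
import numpy as np

def rdma(R):
    """Rating Deviation from Mean Agreement per user.

    R: 2-D array of ratings (users x items), np.nan marking missing entries.
    """
    item_mean = np.nanmean(R, axis=0)          # average rating of each item
    item_count = np.sum(~np.isnan(R), axis=0)  # number of ratings per item
    scores = []
    for u in range(R.shape[0]):
        rated = ~np.isnan(R[u])
        # deviation from the item mean, damped for frequently rated items
        dev = np.abs(R[u, rated] - item_mean[rated]) / item_count[rated]
        scores.append(dev.sum() / rated.sum())
    return np.array(scores)

def degsim(R, k=2):
    """Degree of Similarity with top-k neighbors (Pearson correlation)."""
    # simple global-mean imputation of missing ratings (an assumption here)
    filled = np.where(np.isnan(R), np.nanmean(R), R)
    corr = np.corrcoef(filled)
    np.fill_diagonal(corr, -np.inf)            # exclude self-similarity
    top_k = np.sort(corr, axis=1)[:, -k:]      # k most similar neighbors
    return top_k.mean(axis=1)
```

A user whose ratings systematically deviate from item averages (as attack profiles pushing a target item tend to) receives a high RDMA score, while groups of near-identical injected profiles inflate each other's DegSim.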
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4519300 | PMC |
| http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130968 | PLOS |
PeerJ Comput Sci
June 2024
Department of Computer Technologies, Eskisehir Technical University, Eskişehir, Turkey.
The topic of privacy-preserving collaborative filtering is gaining more and more attention. Nevertheless, privacy-preserving collaborative filtering techniques remain vulnerable to shilling (profile injection) attacks. Hence, identifying counterfeit profiles is crucial for such systems to succeed.
IEEE Trans Neural Netw Learn Syst
June 2022
Due to the pivotal role of recommender systems (RS) in guiding customers toward purchases, there is a natural incentive for unscrupulous parties to spoof an RS for profit. In this article, we study shilling attacks in which an adversarial party injects a number of fake user profiles for improper purposes. Conventional shilling attack approaches lack attack transferability (i.e., …).
Math Biosci Eng
May 2022
School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China.
Organized malicious shilling attackers influence the output of collaborative filtering recommendation systems by inserting fake users into the rating matrix within the database. The existence of shilling attacks poses a serious risk to the stability of the system. To counter this specific security threat, many attack detection methods have been proposed.
Math Biosci Eng
December 2019
Tianjin Key Laboratory of Intelligence Computing and Novel Software Technology, Tianjin University of Technology, Tianjin 300384, China; Key Laboratory of Computer Vision and System, Ministry of Education, Tianjin University of Technology, Tianjin 300384, China.
Collaborative filtering has been widely used in recommendation systems to recommend items that users might like. However, collaborative filtering based recommendation systems are vulnerable to shilling attacks. Malicious users tend to increase or decrease the recommended frequency of target items by injecting fake profiles.
PLoS One
August 2018
School of Big Data & Software Engineering, Chongqing University, Chongqing 40044, China.
Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems.