Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.

Front Artif Intell

Escuela de Ingeniería en Ciberseguridad, FICA, Universidad de Las Américas, Quito, Ecuador.

Published: September 2024

In today's information age, recommender systems have become an essential tool for filtering and personalizing the massive flow of data to users. However, the increasing complexity and opaque nature of these systems have raised concerns about transparency and user trust. A lack of explainability in recommendations can lead to ill-informed decisions and reduced confidence in these systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods such as LIME and SHAP to explain the models' decisions. The results indicated significant improvements in recommendation precision, along with a notable increase in users' ability to understand and trust the suggestions provided by the system. Specifically, we observed a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value both in performance and in user experience.
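As an illustration of the kind of pipeline the abstract describes (a rating predictor whose individual predictions are explained with SHAP), the following minimal Python sketch attributes predicted ratings to their input features. The toy feature matrix, the RandomForestRegressor stand-in, and all variable names are assumptions for illustration only; the paper's actual MovieLens and Amazon models are not specified in this abstract.

# Minimal, illustrative sketch: a toy feature-based rating predictor stands
# in for the recommender, and SHAP's model-agnostic KernelExplainer
# attributes each predicted rating to its input features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical tabular features for (user, item) pairs, e.g. user activity,
# item popularity, genre overlap, recency; the rating is the target.
X = rng.random((500, 4))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0.0, 0.1, 500)

# Train a stand-in rating model (the paper's actual recommenders differ).
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Explain a few predicted ratings: each SHAP value is a feature's
# contribution to pushing that prediction above or below the average rating.
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])
print(shap_values)

LIME can be applied analogously to the same feature matrix via lime.lime_tabular.LimeTabularExplainer in regression mode, calling explain_instance on individual (user, item) rows to obtain per-recommendation explanations.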

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410769
DOI: http://dx.doi.org/10.3389/frai.2024.1410790

Publication Analysis

Top Keywords (frequency)

recommendation systems (8)
explainability techniques (8)
precision recommendations (8)
transparency precision (4)
precision age (4)
age evaluation (4)
evaluation explainability-enhanced (4)
recommendation (4)
explainability-enhanced recommendation (4)
systems (4)
