As one realm of AI, recommender systems have attracted significant research attention due to concerns about their devastating effects on society's most vulnerable and marginalised communities. Both the media and the academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet, there is limited knowledge about the extent to which individuals might question AI-based recommendations when these are perceived as biased. To address this gap, we investigate the effects of espoused national cultural values on AI questionability, examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to the current academic discourse about the need to hold AI accountable.

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8214712 (PMC)
http://dx.doi.org/10.1007/s10796-021-10156-2 (DOI)
