Our online lives generate a wealth of behavioral records, which are stored and leveraged by technology platforms. These data can be used to create value for users by personalizing services. At the same time, however, they pose a threat to people's privacy by offering a highly intimate window into private traits (e.g., personality, political ideology, sexual orientation). We explore the concept of cloaking: allowing users to hide parts of their digital footprints from predictive algorithms in order to prevent unwanted inferences. This article addresses two open questions: (i) can cloaking be effective in the longer term, as users continue to generate new digital footprints? And (ii) what is the potential impact of cloaking on the accuracy of desirable inferences? We introduce a novel strategy focused on cloaking "metafeatures" and compare its efficacy against cloaking the raw footprints directly. The main findings are: (i) while cloaking effectiveness does indeed diminish over time, using metafeatures slows the degradation; (ii) there is a tradeoff between privacy and personalization: cloaking undesired inferences can also inhibit desirable inferences. Furthermore, the metafeature strategy, which yields more stable cloaking, also incurs a larger reduction in desirable inferences.
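To make the cloaking idea concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm, model, or data): a linear model infers a private trait from binary footprint features, and cloaking greedily hides the most revealing footprints, either individually or as metafeature groups, until the inference drops below the decision threshold. All feature names, weights, and groupings below are illustrative assumptions.

```python
import math

# Hypothetical trained weights: how strongly each raw footprint pushes the
# model toward the unwanted inference (positive = more revealing).
weights = {
    "likes:page_a": 1.2,
    "likes:page_b": 0.9,
    "likes:page_c": 0.4,
    "likes:page_d": -0.3,
}
bias = -1.0

def predict(footprints):
    """Model's probability of the private trait given the visible footprints."""
    score = bias + sum(weights.get(f, 0.0) for f in footprints)
    return 1.0 / (1.0 + math.exp(-score))

def cloak(footprints, threshold=0.5):
    """Greedily hide the most revealing raw footprints until the model's
    inference falls below the decision threshold."""
    visible = set(footprints)
    for f in sorted(visible, key=lambda x: weights.get(x, 0.0), reverse=True):
        if predict(visible) < threshold:
            break
        visible.discard(f)
    return visible

# Hypothetical metafeature grouping: correlated raw footprints collapsed
# into higher-level units that are hidden all at once.
metafeatures = {
    "topic_x": {"likes:page_a", "likes:page_c"},
    "topic_y": {"likes:page_b", "likes:page_d"},
}

def cloak_metafeatures(footprints, threshold=0.5):
    """Variant in the spirit of the metafeature strategy: hide whole
    groups of related footprints rather than individual ones."""
    visible = set(footprints)
    group_weight = lambda g: sum(weights.get(f, 0.0) for f in metafeatures[g])
    for g in sorted(metafeatures, key=group_weight, reverse=True):
        if predict(visible) < threshold:
            break
        visible -= metafeatures[g]
    return visible

user = {"likes:page_a", "likes:page_b", "likes:page_c", "likes:page_d"}
print(f"before cloaking:      p = {predict(user):.2f}")
print(f"raw cloaking:         p = {predict(cloak(user)):.2f}")
print(f"metafeature cloaking: p = {predict(cloak_metafeatures(user)):.2f}")
```

Hiding a whole metafeature group removes footprints that a desirable personalization model might also rely on, which offers one intuition for the tradeoff reported above: the more stable strategy incurs a larger personalization cost.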
DOI: http://dx.doi.org/10.1089/big.2024.0036