The widespread use of algorithms for prediction-based decisions presses the question of what it means for a given act or practice to be discriminatory. Building on work by Kusner and colleagues in machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the philosophical relevance of the proposed condition, we consider two prominent accounts of discrimination from the recent literature, due to Lippert-Rasmussen and Hellman respectively, show that neither logically implies our condition, and argue that both consequently face important objections.
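For reference, the counterfactual fairness condition of Kusner et al. (2017), on which the proposed condition builds, can be stated as follows; the notation (protected attribute A, remaining features X, background variables U, predictor Ŷ) is theirs, not the abstract's:

    P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a)
        = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)

for every outcome y and every value a' attainable by A. Intuitively, the prediction would not have differed had the protected attribute taken a different value, holding the background variables fixed.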
Background: Work stress places a heavy economic and disease burden on society. Recent technological advances include digital health interventions that help employees prevent and manage their stress at work effectively. Although such digital solutions come with an array of ethical risks, especially if they involve biomedical big data, the incorporation of employees' values into their design and deployment has been widely overlooked.
Ethics Inf Technol, October 2020
In this paper we argue that the transparency of machine learning algorithms, like their explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black-box algorithms with making their decisions (post hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation oversimplify the real nature of black boxes and risk misleading the public about the normative features of a model.
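To make the target of the criticism concrete, here is a minimal sketch of a post-hoc counterfactual explanation search in the style of Wachter et al. (2017); the toy model, instance, and hyperparameters are illustrative assumptions, not taken from the paper discussed above:

    # Minimal counterfactual-explanation sketch (assumed toy setup):
    # find x_cf close to x such that the black-box score f(x_cf) nears `target`,
    # minimising  lam * (f(x_cf) - target)^2 + ||x_cf - x||_1
    # by finite-difference gradient descent.
    import numpy as np

    def counterfactual(f, x, target, lam=50.0, lr=0.05, steps=500):
        x_cf = x.astype(float).copy()
        eps = 1e-4
        for _ in range(steps):
            grad = np.zeros_like(x_cf)
            for i in range(len(x_cf)):
                e = np.zeros_like(x_cf)
                e[i] = eps
                loss_p = lam * (f(x_cf + e) - target) ** 2 + np.abs(x_cf + e - x).sum()
                loss_m = lam * (f(x_cf - e) - target) ** 2 + np.abs(x_cf - e - x).sum()
                grad[i] = (loss_p - loss_m) / (2 * eps)
            x_cf -= lr * grad
        return x_cf

    # Toy "black box": a fixed sigmoid scorer standing in for an opaque model.
    w = np.array([0.8, -0.5, 0.3])
    f = lambda z: 1.0 / (1.0 + np.exp(-(z @ w)))

    x = np.array([0.2, 0.9, 0.1])            # instance that received a low score
    x_cf = counterfactual(f, x, target=0.7)  # "what minimal change would flip it?"
    print(f(x), f(x_cf), x_cf - x)

The search returns a nearby input whose score approaches the desired outcome, trading closeness to the target against the L1 distance from the original instance. On the view defended in the paper, such an answer explains the decision only at one level of abstraction and reveals nothing about the normative features of the underlying model.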
In his recent article 'Limits of trust in medical AI', Hatherley argues that if the motivations usually recognised as relevant for interpersonal trust must also apply to interactions between humans and medical artificial intelligence (AI), then these systems do not appear to be appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical AI if one refrains from simply assuming that trust describes only human-human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents.