AI systems, such as self-driving cars, healthcare robots, and autonomous weapon systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In this paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term: the attributability sense. More specifically, relying on work by Nomy Arpaly and Timothy Schroeder (In Praise of Desire, OUP 2014), we propose that the behavior of these systems can manifest their 'quality of will' and thus be regarded as something for which they can be blameworthy.
The Top-Down Argument for the ability to do otherwise aims to establish that humans can do otherwise in the sense relevant to debates about free will. It consists of two premises: first, we must always answer the question of whether some phenomenon (such as the ability to do otherwise) exists by consulting our best scientific theories of the domain at issue; second, our best scientific theories of human action presuppose that humans can do otherwise.
Many philosophers characterize a particularly important sense of free will and responsibility by referring to basically deserved blame. But what is basically deserved blame? The aim of this paper is to identify the appraisal entailed by basic desert claims. It presents three desiderata for an account of desert appraisals and argues that important recent theories fail to meet them.