The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human-level autonomy in order to protect us from their potentially destructive power. It is often assumed that to do this, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous" in that their ability to make choices and to act is limited across a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if it becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. We conclude on the basis of these comparisons that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, but that we can permissibly maintain this approach only so long as we do not aim at developing technology with human-level autonomy.
DOI: http://dx.doi.org/10.1080/21507740.2020.1740354
Implement Sci
February 2024
Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118, Kiel, Germany.
Background: Clinical decision support systems (CDSSs) have the potential to improve quality of care, patient safety, and efficiency because of their ability to perform medical tasks in a more data-driven, evidence-based, and semi-autonomous way. However, CDSSs may also affect the professional identity of health professionals. Some professionals might experience these systems as a threat to their professional identity, as CDSSs could partially substitute clinical competencies, autonomy, or control over the care process.
Front Psychol
March 2023
AdmEthics - Research Group in Ethics, Virtues, and Moral Dilemmas in Administration, Administration Graduate Program of the Administrative and Socioeconomic Sciences College, Santa Catarina State University, Florianópolis, Brazil.
Artificial intelligence (AI) advancements are changing people's lives in ways never imagined before. We argue that during the first machine age, ethics was kept in perspective by regarding technology as a mere instrument. However, the second machine age is already a reality, and the changes brought by AI are reshaping how people interact and flourish.
J Acoust Soc Am
November 2022
Cochlear Implant Processing Laboratory-Center for Robust Speech Systems (CRSS-CILab), University of Texas at Dallas, Richardson, Texas 75080, USA.
Naturalistic sounds encode salient acoustic content that provides situational context or subject/system properties essential for acoustic awareness, autonomy, safety, and improved quality of life for individuals with sensorineural hearing loss. Cochlear implants (CIs) are assistive hearing devices that restore auditory function in hearing-impaired individuals. Most CI research advancements have focused on improving speech recognition in noisy, reverberant, or time-varying environments.
AJOB Neurosci
October 2020
Berman Institute of Bioethics, Johns Hopkins University.