AI Article Synopsis

  • The ethics of robots and AI typically focuses on not-yet-existent AI with human-level autonomy, assuming such systems must be programmed with the true moral theory to avoid harm.
  • Researchers argue that this focus is misguided, as current robots are "semi-autonomous" with limited decision-making capabilities.
  • The paper proposes a new approach to AI ethics centered on obligations toward semi-autonomous agents, emphasizing the need to protect human interests without pursuing fully autonomous AI.

Article Abstract

The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human-level autonomy in order to protect us from their potentially destructive power. It is often assumed that to do so, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous" in that their ability to make choices and to act is limited across a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if it becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. We conclude on the basis of these comparisons that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, but that we can only permissibly maintain this approach so long as we do not aim at developing technology with human-level autonomy.

Source
http://dx.doi.org/10.1080/21507740.2020.1740354

Publication Analysis

Top Keywords

human-level autonomy (12)
artificial intelligence (8)
ethics semi-autonomous (8)
semi-autonomous agents (8)
nonhuman animals (8)
ethics (5)
intelligence service (4)
service human (4)
human pragmatic (4)
pragmatic steps (4)

Similar Publications

Background: Clinical decision support systems (CDSSs) have the potential to improve quality of care, patient safety, and efficiency because of their ability to perform medical tasks in a more data-driven, evidence-based, and semi-autonomous way. However, CDSSs may also affect the professional identity of health professionals. Some professionals might experience these systems as a threat to their professional identity, as CDSSs could partially substitute clinical competencies, autonomy, or control over the care process.

Ethical content in artificial intelligence systems: A demand explained in three critical points.

Front Psychol

March 2023

AdmEthics - Research Group in Ethics, Virtues, and Moral Dilemmas in Administration, Administration Graduate Program of the Administrative and Socioeconomic Sciences College, Santa Catarina State University, Florianópolis, Brazil.

Artificial intelligence (AI) advancements are changing people's lives in ways never imagined before. We argue that ethics used to be put in perspective by seeing technology as an instrument during the first machine age. However, the second machine age is already a reality, and the changes brought by AI are reshaping how people interact and flourish.

Naturalistic sounds encode salient acoustic content that provides situational context or subject/system properties essential for acoustic awareness, autonomy, safety, and improved quality of life for individuals with sensorineural hearing loss. Cochlear implants (CIs) are an assistive hearing device that restores auditory function in hearing impaired individuals. Most CI research advancements have focused on improving speech recognition in noisy, reverberant, or time-varying diverse environments.
