For decades, one objective of artificial intelligence research has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has worked to design artificial agents that can perform tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays: artificial agents are already entering everyday environments as unmanned vehicles, intelligent houses, and humanoid robots capable of caring for people. In this context, research in the field of machine ethics has become a pressing concern. Machine ethics focuses on developing ethical mechanisms that enable artificial agents to engage in moral behavior. However, crucial challenges remain in the development of truly Artificial Moral Agents. This paper aims to show the current status of Artificial Moral Agents by analyzing models proposed over the past two decades. As a result of this review, a taxonomy is proposed that classifies Artificial Moral Agents according to the strategies and criteria they use to deal with ethical problems. The review aims to illustrate (1) the complexity of designing and developing ethical mechanisms for this type of agent, and (2) that there is a long way to go (from a technological perspective) before this type of artificial agent can replace human judgment in difficult, surprising, or ambiguous moral situations.
DOI: http://dx.doi.org/10.1007/s11948-019-00151-x
Future military conflicts are likely to involve peer or near-peer adversaries in large-scale combat operations, leading to casualty rates not seen since World War II. Casualty volume, combined with anticipated disruptions in medical evacuation, will create resource-limited environments that challenge medical responders to make complex, repetitive triage decisions. Similarly, pandemics, mass casualty incidents, and natural disasters strain civilian health care providers, increasing their risk for exhaustion, burnout, and moral injury.
Top Cogn Sci, January 2025
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology.
Recent theoretical work has argued that moral psychology can be understood through the lens of "resource rational contractualism." The view posits that the best way of making a decision that affects other people is to get everyone together to negotiate under idealized conditions. The outcome of that negotiation is an arrangement (or "contract") that would lead to mutual benefit.
HPB (Oxford), December 2024
Department of Advanced & Minimally Invasive Surgery, American Hospital of Tbilisi, 17 Ushangi Chkheidze Street, Tbilisi 0102, Georgia.
Background: Hepato-Pancreato-Biliary (HPB) surgery is a complex specialty, and Artificial Intelligence (AI) applications have the potential to improve pre-, intra-, and postoperative outcomes of HPB surgery. While ethics guidelines have been developed for the use of AI in clinical surgery, the ethical implications and reliability of AI specifically in HPB surgery remain unexplored.
Methods: An online survey was developed by the Innovation Committee of the E-AHPBA to investigate current perspectives on the ethical principles and trustworthiness of AI in HPB surgery among the E-AHPBA membership.
Cognition, January 2025
Social Brain Sciences Group, Department of Humanities, Social and Political Sciences, ETH Zurich, Zurich, Switzerland.
Throughout history, art creation has been regarded as a uniquely human means to express original ideas, emotions, and experiences. However, as Generative Artificial Intelligence reshapes visual, aesthetic, legal, and economic culture, critical questions arise about the moral and aesthetic implications of AI-generated art. Despite the growing use of AI tools in art, the moral impact of AI involvement in the art creation process remains underexplored.
JMIR Form Res, January 2025
Department of Physician Assistant Studies, Massachusetts College of Pharmacy and Health Sciences, 179 Longwood Avenue, Boston, MA, 02115, United States, 1 6177322961.
The integration of large language models (LLMs), as seen with the generative pretrained transformer series, into health care education and clinical management represents a transformative potential. The practical use of current LLMs in health care sparks great anticipation for new avenues, yet their adoption also elicits considerable concerns that necessitate careful deliberation. This study aims to evaluate the application of state-of-the-art LLMs in health care education, highlighting the following shortcomings as areas requiring significant and urgent improvement: (1) threats to academic integrity, (2) dissemination of misinformation and risks of automation bias, (3) challenges with information completeness and consistency, (4) inequity of access, (5) risks of algorithmic bias, (6) exhibition of moral instability, (7) technological limitations in plugin tools, and (8) lack of regulatory oversight in addressing legal and ethical challenges.