In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like "intention" which may have no natural analog in artificial agents, it may prove difficult to design a "like-for-like" comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.


Source: http://dx.doi.org/10.1111/cogs.13315

