As part of the Special Issue topic "Human-Centered AI at Work: Common Ground in Theories and Methods," we present a perspective article that looks at human-AI teamwork from a team-centered AI perspective, i. e., we highlight important design aspects that the technology needs to fulfill in order to be accepted by humans and to be fully utilized in the role of a team member in teamwork. Drawing from the model of an idealized teamwork process, we discuss the teamwork requirements for successful human-AI teaming in interdependent and complex work domains, including e.g., responsiveness, situation awareness, and flexible decision-making. We emphasize the need for team-centered AI that aligns goals, communication, and decision making with humans, and outline the requirements for such team-centered AI from a technical perspective, such as cognitive competence, reinforcement learning, and semantic communication. In doing so, we highlight the challenges and open questions associated with its implementation that need to be solved in order to enable effective human-AI teaming.
Download full-text PDF |
Source |
---|---|
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10565103 | PMC |
http://dx.doi.org/10.3389/frai.2023.1252897 | DOI Listing |
Br J Psychol
December 2024
Florida Institute for National Security (FINS), Gainesville, Florida, USA.
Text-based automatic personality recognition (APR) operates at the intersection of artificial intelligence (AI) and psychology to determine the personality of an individual from their text sample. This covert form of personality assessment is key for a variety of online applications that contribute to individual convenience and well-being such as that of chatbots and personal assistants. Despite the availability of good quality data utilizing state-of-the-art AI methods, the reported performance of these recognition systems remains below expectations in comparable areas.
View Article and Find Full Text PDFFront Artif Intell
November 2024
Software Competence Center Hagenberg (SCCH), Hagenberg, Austria.
In this paper, we discuss technologies and approaches based on Knowledge Graphs (KGs) that enable the management of inline human interventions in AI-assisted manufacturing processes in Industry 5.0 under potentially changing conditions in order to maintain or improve the overall system performance. Whereas KG-based systems are commonly based on a static view with their structure fixed at design time, we argue that the dynamic challenge of inline Human-AI (H-AI) collaboration in industrial settings calls for a late shaping design principle.
View Article and Find Full Text PDFIEEE Trans Vis Comput Graph
September 2024
Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration.
View Article and Find Full Text PDFCogn Res Princ Implic
September 2024
Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
With the growing role of artificial intelligence (AI) in our lives, attention is increasingly turning to the way that humans and AI work together. A key aspect of human-AI collaboration is how people integrate judgements or recommendations from machine agents, when they differ from their own judgements. We investigated trust in human-machine teaming using a perceptual judgement task based on the judge-advisor system.
View Article and Find Full Text PDFErgonomics
July 2024
School of Computing, Clemson University, Clemson, South Carolina.
Despite the gains in performance that AI can bring to human-AI teams, they also present them with new challenges, such as the decline in human ability to respond to AI failures as the AI becomes more autonomous. This challenge is particularly dangerous in human-AI teams, where the AI holds a unique role in the team's success. Thus, it is imperative that researchers find solutions for designing AI team-mates that consider their human team-mates' needs in their adaptation logic.
View Article and Find Full Text PDFEnter search terms and have AI summaries delivered each week - change queries or unsubscribe any time!