AI Article Synopsis

  • The study aimed to show that anthropomorphism can enhance trust in automation only when it provides useful, contextually relevant information.
  • While participants found the anthropomorphic features of the system appealing, these features did not significantly impact their overall trust and confidence in its advice.
  • Vocal inflections that communicated meaningful, contextually useful information were what improved trust, suggesting that designers should focus on the informational value of anthropomorphic elements rather than on their appearance alone.

Article Abstract

Objective: The objective was to demonstrate that anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation.

Background: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that helps users predict automation failures.

Method: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system, which was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about the certainty or uncertainty of the automated advice.
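
The design and its key outcome measure can be made concrete with a short simulation. The Python sketch below is purely illustrative and not the study's materials: all acceptance probabilities are assumptions chosen to mirror the reported pattern, in which only the meaningful voice carries information that helps participants anticipate when the 50%-reliable aid is wrong. Appearance is included as a factor with no modeled behavioral effect, matching the null result the study reports.

    # Illustrative sketch only (not the authors' code): simulates the
    # 2 (appearance) x 3 (voice inflection) between-subjects design with a
    # 50%-reliable aid, scoring trust calibration as the gap between the
    # acceptance rates for correct and incorrect advice.
    import random

    APPEARANCES = ["avatar", "camera_eye"]
    VOICES = ["monotone", "meaningless", "meaningful"]

    def run_participant(appearance: str, voice: str, n_trials: int = 24) -> float:
        """Return a calibration score in [-1, 1]; higher = better calibrated."""
        accepted = {True: 0, False: 0}  # acceptances, keyed by advice correctness
        seen = {True: 0, False: 0}      # trial counts, keyed the same way
        for _ in range(n_trials):
            advice_correct = random.random() < 0.5  # SAM is 50% reliable
            if voice == "meaningful":
                # Assumed: certainty cues in the voice let participants accept
                # good advice and reject bad advice more often.
                p_accept = 0.8 if advice_correct else 0.3
            else:
                # Assumed: no usable cue (appearance included), so acceptance
                # is unrelated to whether the advice is correct.
                p_accept = 0.55
            seen[advice_correct] += 1
            accepted[advice_correct] += random.random() < p_accept
        return (accepted[True] / max(seen[True], 1)
                - accepted[False] / max(seen[False], 1))

    for appearance in APPEARANCES:
        for voice in VOICES:
            scores = [run_participant(appearance, voice) for _ in range(16)]
            print(f"{appearance:10} {voice:11} mean calibration = "
                  f"{sum(scores) / len(scores):+.2f}")

Under these assumptions, only the meaningful-voice cells show a positive calibration score; varying appearance changes nothing, which is the qualitative pattern in the Results below.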

Results: The avatar appearance was rated as more anthropomorphic than the camera eye, and the meaningless and meaningful inflections were both rated as more anthropomorphic than the monotone voice. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.

Conclusion: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance.

Application: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457490
DOI: http://dx.doi.org/10.1177/00187208231218156

Publication Analysis

Top Keywords

human-automation trust (20)
trust calibration (12)
trust (11)
trust expectation (8)
expectation model (8)
model hatem (8)
calibration confidence (8)
rated anthropomorphic (8)
anthropomorphism (6)
human-automation (5)

Similar Publications

Trust and system reliability can influence a user's dependence on automated systems. This study aimed to investigate how increases and decreases in automation reliability affect users' trust in these systems and how these changes in trust are associated with users' dependence on the system. Participants completed a color identification task with the help of an automated aid whose reliability either increased from 50% to 100% or decreased from 100% to 50% as the task progressed, depending on group assignment.

Article Synopsis
  • The study aimed to evaluate how well active inference models predict when drivers take over from automated vehicles and how these models relate to cognitive fatigue, trust, and situation awareness.
  • Using a driving simulation, researchers developed a model that accurately predicted takeover times, finding that higher cognitive fatigue correlated with more uncertainty in taking control, while better situation awareness was linked to improved understanding of the driving environment.
  • The findings support previous theories on trust in automation and indicate that active inference models can enhance the design and safety of automated driving systems by integrating human cognitive factors.

Increased automation transparency can improve the accuracy of automation use but can lead to increased bias towards agreeing with advice. Information about the automation's confidence in its advice may also increase the predictability of automation errors. We examined the effects of providing automation transparency, automation confidence information, and their potential interacting effect on the accuracy of automation use and other outcomes.

Article Synopsis
  • Radiotherapy treatment planning is shifting towards more automation, similar to the changes seen in the aviation industry, raising concerns about human roles and risks within these automated systems.
  • A working group at the ESTRO Physics Workshop 2023 suggested a framework based on aviation insights, outlining different levels of automation in radiotherapy and their impact on human involvement.
  • Key risks of this automation include complacency and data overload, which necessitate strategies like checklists and proper training to ensure effective human-automation collaboration while maintaining the critical need for human oversight in complex clinical scenarios.

Objective: This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior.

Background: Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent danger of cyberattacks, which pose a formidable threat to the secure operation of AVs and, consequently, to human trust.

