Explainable artificial intelligence aims to bring transparency to artificial intelligence (AI) systems by translating, simplifying, and visualizing their decisions. While society remains skeptical of AI systems, studies show that transparent and explainable AI systems can help improve the Human-AI trust relationship. This manuscript presents two studies that assess three AI decision visualization attribution models that manipulate morphological clarity (MC), along with two information presentation-order methods, to determine each visualization's impact on the Human-AI trust relationship through increased confidence and cognitive fit (CF).
We report the results of a study that uses a BCI to drive an interactive interface countermeasure that allows users to self-regulate sustained attention while performing an ecologically valid, long-duration business logistics task. An engagement index derived from EEG signals was used to drive the BCI, while fNIRS measured hemodynamic activity throughout the task. Participants (n = 30) were split into three groups: (1) no countermeasures (NOCM), (2) continuous countermeasures (CCM), and (3) event-synchronized, level-dependent countermeasures (ECM).
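The abstract does not specify which engagement index was used to drive the BCI. As an illustration only, the sketch below computes one widely used index, beta / (alpha + theta) band power (Pope et al., 1995), from a single-channel EEG window; the sampling rate, band limits, and windowing parameters are assumptions, not details from the study.

```python
# Hedged sketch: a common EEG engagement index, beta / (alpha + theta).
# Sampling rate, band limits, and windowing are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz


def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz using the trapezoidal rule."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])


def engagement_index(eeg_window, fs=FS):
    """Return beta/(alpha+theta) for a single-channel EEG window."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / (alpha + theta)


if __name__ == "__main__":
    # Synthetic 10 s single-channel signal standing in for real EEG.
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / FS)
    signal = (np.sin(2 * np.pi * 10 * t)          # alpha-band component
              + 0.5 * np.sin(2 * np.pi * 20 * t)  # beta-band component
              + rng.normal(scale=0.5, size=t.size))
    print(f"Engagement index: {engagement_index(signal):.3f}")
```

In a closed-loop setup such as the one described, an index like this would typically be computed over a sliding window and thresholded to trigger the interface countermeasure; the exact thresholding scheme used in the study is not given in the abstract.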