In this article, we discuss challenges and strategies for evaluating natural language interfaces (NLIs) for data visualization. Through an examination of prior studies and by reflecting on our own experiences in evaluating visualization NLIs, we highlight the benefits and considerations of three task framing strategies: Jeopardy-style facts, open-ended tasks, and target replication tasks. We hope the discussions in this article can guide future researchers working on visualization NLIs and help them avoid common challenges and pitfalls when evaluating these systems. Finally, to motivate future research, we highlight topics that call for further investigation, including the development of new evaluation metrics and consideration of the type of natural language input (spoken versus typed), among others.

Source
http://dx.doi.org/10.1109/MCG.2020.2986902
