Abstract
Designing suitable tasks for visualization evaluation remains challenging.
Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended'
tasks to assess the efficacy of a proposed visualization; however, nontrivial
trade-offs exist between the two. Low-level tasks enable robust quantitative
evaluations but do not reflect the complex, real-world usage of a visualization.
Open-ended tasks, while excellent for insight-based evaluations, are typically
unstructured and require time-consuming interviews. Bridging this gap, we
propose inferential tasks: a complementary task category based on inferential
learning in psychology. Inferential tasks produce quantitative evaluation data
in which users are prompted to form and validate their own findings with a
visualization. We demonstrate the use of inferential tasks through a validation
experiment on two well-known visualization tools.