Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness

  • Khyathi Raghavi Chandu ,
  • Anas Awadalla ,
  • Ximing Lu ,
  • J. Park ,
  • Jack Hessel ,
  • Lijuan Wang ,
  • Yejin Choi

ICLR 2025


The ability to acknowledge the inevitable uncertainty in its knowledge and reasoning is a prerequisite for an AI system to be truly truthful and reliable. In this paper, we present a taxonomy of uncertainty specific to vision-language AI systems, distinguishing between epistemic uncertainty (arising from a lack of information) and aleatoric uncertainty (due to inherent unpredictability), and further explore finer-grained categories within each. Based on this taxonomy, we synthesize a benchmark dataset, CertainlyUncertain, featuring 178K visual question answering (VQA) samples as contrastive pairs. This is achieved by 1) inpainting images so that previously answerable questions become unanswerable; and 2) using image captions to prompt large language models for both answerable and unanswerable questions. Additionally, we introduce a new metric, confidence-weighted accuracy, which correlates well with both accuracy and calibration error and addresses the shortcomings of existing metrics.
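The abstract names confidence-weighted accuracy but does not define it here. The sketch below is a hypothetical illustration of one way such a metric could be computed, assuming each model answer carries a correctness label and a self-reported confidence in [0, 1]; the function name and weighting scheme are assumptions for illustration, not the paper's definition.

```python
from typing import Sequence


def confidence_weighted_accuracy(correct: Sequence[bool],
                                 confidence: Sequence[float]) -> float:
    """Hypothetical confidence-weighted accuracy (illustrative only).

    Correct answers are credited in proportion to the model's stated
    confidence, and incorrect answers are credited in proportion to how
    unconfident the model was, so the score rewards being both right and
    well calibrated. This is an assumed formulation, not the paper's.
    """
    assert len(correct) == len(confidence) and len(correct) > 0
    total = 0.0
    for is_correct, conf in zip(correct, confidence):
        # Confident correct answers earn full credit; hedged wrong
        # answers earn partial credit for acknowledging uncertainty.
        total += conf if is_correct else (1.0 - conf)
    return total / len(correct)


# Example: two confident correct answers and one hedged wrong answer.
print(confidence_weighted_accuracy([True, True, False], [0.9, 0.8, 0.2]))
```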