VeriTrail: Closed-Domain Hallucination Detection with Traceability

Even when instructed to adhere to source material, Language Models often generate unsubstantiated content – a phenomenon known as “closed-domain hallucination.” This risk is amplified in processes with multiple generative steps (MGS), compared to processes with a single generative step (SGS). However, due to the greater complexity of MGS processes, we argue that detecting hallucinations in their final outputs is necessary but not sufficient: it is equally important to trace where hallucinated content was likely introduced and how faithful content may have been derived from the source through intermediate outputs. To address this need, we present VeriTrail, the first closed-domain hallucination detection method designed to provide traceability for both MGS and SGS processes. We also introduce the first datasets to include all intermediate outputs as well as human annotations of final outputs’ faithfulness for their respective MGS processes. We demonstrate that VeriTrail outperforms baseline methods on both datasets.

VeriTrail: Detect hallucinations and trace provenance in AI workflows

Dasha Metropolitansky, a research data scientist with Microsoft Research Special Projects, introduces VeriTrail, a new method for closed-domain hallucination detection in multi-step AI workflows. Unlike prior methods, VeriTrail provides traceability: it identifies where hallucinated content was likely introduced and establishes the provenance of faithful content by tracing a path back to the source text. VeriTrail also outperforms baseline methods at hallucination detection. The combination of traceability and effective hallucination detection makes VeriTrail a powerful tool for auditing the integrity of content generated by language models.
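To make the idea of traceability concrete, the sketch below records a toy multi-step pipeline (a source text, two intermediate chunk summaries, and a final summary) and walks a claim backward from the final output toward the source. Every name in it (GenerationStep, is_supported, trace_claim) is hypothetical, and the keyword-overlap check is a crude stand-in for the language-model-based verification a real system like VeriTrail would perform; this illustrates the concept, not the paper's algorithm.

```python
from dataclasses import dataclass, field


@dataclass
class GenerationStep:
    """One node in a generative process: its output text plus the nodes it read from."""
    name: str
    text: str
    inputs: list["GenerationStep"] = field(default_factory=list)


def is_supported(claim: str, evidence: str) -> bool:
    """Toy verifier: every token of the claim must appear in the evidence.
    A real system would use a language model or an NLI classifier here."""
    tokens = {w.lower().strip(".,") for w in claim.split()}
    evidence_lower = evidence.lower()
    return all(t in evidence_lower for t in tokens)


def trace_claim(claim: str, node: GenerationStep) -> list[str]:
    """Walk backward from a given output toward the source. Returns the chain of
    nodes that support the claim; if the chain stops before reaching the source,
    the last node in the chain is where the content was likely introduced."""
    chain = []
    current = node
    while is_supported(claim, current.text):
        chain.append(current.name)
        supporting = [p for p in current.inputs if is_supported(claim, p.text)]
        if not supporting:
            break
        current = supporting[0]  # follow one supporting parent, for simplicity
    return chain


# Example: source -> two chunk summaries -> final summary (a simple MGS process).
source = GenerationStep("source", "The 2024 report shows revenue grew 12 percent.")
summary_a = GenerationStep("summary_a", "Revenue grew 12 percent in 2024.", [source])
summary_b = GenerationStep("summary_b", "Revenue grew 20 percent, driven by new markets.", [source])
final = GenerationStep("final", "Revenue grew 20 percent, driven by new markets.",
                       [summary_a, summary_b])

print(trace_claim("Revenue grew 20 percent", final))
# ['final', 'summary_b'] -- the chain never reaches 'source', suggesting the
# unsupported figure was introduced at summary_b.

print(trace_claim("Revenue grew 12 percent", summary_a))
# ['summary_a', 'source'] -- a faithful claim traces all the way to the source.
```

In this toy run, the hallucinated claim's chain stops at an intermediate summary rather than reaching the source, which is exactly the kind of localization signal that traceability is meant to provide.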