The paper introduces ICE, a framework for statistically evaluating the faithfulness of LLM explanations by comparing them to matched randomized baselines under multiple intervention operators. The authors evaluate 7 LLMs across diverse tasks and languages, finding that faithfulness depends heavily on the intervention operator used, with deletion interventions often inflating faithfulness estimates on short text. Their analysis reveals anti-faithfulness in one-third of configurations and no meaningful correlation between faithfulness and human plausibility, highlighting the limitations of current explanation methods.
LLM explanation faithfulness varies wildly depending on how you test it, and might even be *anti*-faithful, so stop relying on single-intervention benchmarks.
Evaluating whether explanations faithfully reflect a model's reasoning remains an open problem. Existing benchmarks use single interventions without statistical testing, making it impossible to distinguish genuine faithfulness from chance-level performance. We introduce ICE (Intervention-Consistent Explanation), a framework that compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals. Evaluating 7 LLMs across 4 English tasks, 6 non-English languages, and 2 attribution methods, we find that faithfulness is operator-dependent: operator gaps reach 44 percentage points, with deletion typically inflating estimates on short text but the pattern reversing on long text, suggesting that faithfulness should be interpreted comparatively across intervention operators rather than as a single score. Randomized baselines reveal anti-faithfulness in one-third of configurations, and faithfulness shows near-zero correlation with human plausibility (|r| < 0.04). Multilingual evaluation reveals dramatic model-language interactions not explained by tokenization alone. We release the ICE framework and ICEBench benchmark.
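The core statistic described above, a win rate against matched random baselines with a confidence interval, can be sketched as follows. This is an illustrative reconstruction, not the released ICE code: `win_rate_with_ci`, its arguments, and the bootstrap CI choice are all assumptions; the inputs are per-example faithfulness scores (e.g. prediction change under an intervention) for the real explanation and for a matched random baseline on the same example.

```python
import random

def win_rate_with_ci(expl_scores, rand_scores, n_boot=2000, alpha=0.05, seed=0):
    """Illustrative sketch (not the authors' implementation).

    expl_scores[i]: faithfulness score of the real explanation on example i.
    rand_scores[i]: score of the matched random baseline on the same example.
    Returns the win rate (fraction of examples where the explanation beats
    its matched baseline) and a percentile bootstrap confidence interval.
    """
    rng = random.Random(seed)
    # Paired comparison: explanation "wins" when it beats its own baseline.
    wins = [1 if e > r else 0 for e, r in zip(expl_scores, rand_scores)]
    n = len(wins)
    rate = sum(wins) / n
    # Percentile bootstrap over examples for the confidence interval.
    boots = []
    for _ in range(n_boot):
        sample = [wins[rng.randrange(n)] for _ in range(n)]
        boots.append(sum(sample) / n)
    boots.sort()
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return rate, (lo, hi)
```

Under this framing, a win rate whose interval covers 0.5 is indistinguishable from chance, and an interval entirely below 0.5 corresponds to the anti-faithfulness the abstract reports: the random baseline systematically beats the real explanation.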