The paper introduces VRBench, a novel benchmark for evaluating multi-step reasoning in long narrative videos, comprising 960 videos averaging 1.6 hours each, paired with 8,243 multi-step question-answering pairs and 25,106 timestamped reasoning steps. VRBench employs a human-AI collaborative framework to generate coherent reasoning chains and a multi-phase evaluation pipeline to assess models at both outcome and process levels, including a progress-level LLM-guided scoring metric. Evaluations of 12 LLMs and 19 VLMs on VRBench reveal insights into the capabilities and limitations of current models in temporal reasoning and procedural validity.
Current LLMs and VLMs struggle with multi-step reasoning in long videos, often failing to maintain temporal coherence and procedural validity, as revealed by a new benchmark of hour-long narratives.
We present VRBench, the first long narrative video benchmark crafted for evaluating large models' multi-step reasoning capabilities, addressing limitations in existing evaluations that overlook temporal reasoning and procedural validity. It comprises 960 long videos (with an average duration of 1.6 hours), along with 8,243 human-labeled multi-step question-answering pairs and 25,106 reasoning steps with timestamps. These videos are curated via a multi-stage filtering process, including expert inter-rater review, to prioritize plot coherence. We develop a human-AI collaborative framework that generates coherent reasoning chains, each requiring multiple temporally grounded steps, spanning seven types (e.g., event attribution, implicit inference). VRBench features a multi-phase evaluation pipeline that assesses models at both the outcome and process levels. Beyond multiple-choice questions (MCQs) for final answers, we propose a progress-level LLM-guided scoring metric that comprehensively evaluates reasoning-chain quality across multiple dimensions. Through extensive evaluations of 12 LLMs and 19 VLMs on VRBench, we undertake a thorough analysis and provide valuable insights that advance the field of multi-step reasoning.
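The process-level scoring idea can be sketched as follows. Note this is a minimal illustration of aggregating per-dimension judge scores into one chain score; the dimension names, the equal-weight averaging, and the `StepScore`/`chain_score` helpers are assumptions for exposition, not the paper's actual rubric or implementation.

```python
from dataclasses import dataclass

@dataclass
class StepScore:
    # Hypothetical judging dimensions for one reasoning step, each in [0, 1];
    # VRBench's actual LLM-guided rubric may differ.
    temporal_grounding: float  # does the step reference the right time span?
    logical_validity: float    # does the step follow from earlier steps?
    factual_accuracy: float    # is the step consistent with the video?

def chain_score(steps: list[StepScore]) -> float:
    """Aggregate per-step scores into one process-level score by averaging
    the dimensions within each step, then averaging across steps."""
    if not steps:
        return 0.0
    per_step = [
        (s.temporal_grounding + s.logical_validity + s.factual_accuracy) / 3
        for s in steps
    ]
    return sum(per_step) / len(per_step)

# Example: a two-step chain scored by a hypothetical LLM judge.
chain = [StepScore(1.0, 0.8, 0.9), StepScore(0.6, 0.7, 0.8)]
print(round(chain_score(chain), 3))  # → 0.8
```

In practice the per-dimension scores would come from an LLM judge prompted with the question, the model's reasoning chain, and the ground-truth timestamped steps; the aggregation above simply makes the multi-dimensional, per-step structure of the metric concrete.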