VideoZeroBench, a new hierarchical benchmark, is introduced to evaluate video MLLMs on long-video question answering with rigorous spatio-temporal evidence verification. The benchmark includes 500 manually annotated questions across 13 domains, each paired with a temporal interval and spatial bounding boxes as evidence, and uses a five-level evaluation protocol to disentangle answer generation, temporal grounding, and spatial grounding. Experiments reveal that even state-of-the-art models such as Gemini-3-Pro struggle to provide correct answers with accurate spatio-temporal localization, highlighting a significant gap between surface-level answer correctness and genuine evidence-based reasoning.
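As a rough illustration of what each annotated item carries, a record might look like the sketch below. The field names and types are assumptions for exposition, not the benchmark's released schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BenchmarkItem:
    """One VideoZeroBench question with its evidence annotations.

    Field names and types are illustrative assumptions, not the
    released schema.
    """
    video_id: str
    domain: str                                     # one of the 13 domains
    question: str
    answer: str
    interval: Tuple[float, float]                   # evidence (start, end), seconds
    boxes: List[Tuple[float, float, float, float]]  # per-frame (x1, y1, x2, y2)
```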
Despite impressive headline scores, today's best video MLLMs can't reliably ground their answers in space and time, achieving <1% accuracy when required to identify the spatio-temporal evidence supporting their predictions.
Recent video multimodal large language models achieve impressive results across various benchmarks. However, current evaluations suffer from two critical limitations: (1) inflated scores can mask deficiencies in fine-grained visual understanding and reasoning, and (2) answer correctness is often measured without verifying whether models identify the precise spatio-temporal evidence supporting their predictions. To address these limitations, we present VideoZeroBench, a hierarchical benchmark designed for challenging long-video question answering that rigorously verifies spatio-temporal evidence. It comprises 500 manually annotated questions across 13 domains, each paired with temporal intervals and spatial bounding boxes as evidence. To disentangle answer generation, temporal grounding, and spatial grounding, we introduce a five-level evaluation protocol that progressively tightens evidence requirements. Experiments show that even Gemini-3-Pro correctly answers fewer than 17% of questions under the standard end-to-end QA setting (Level-3). When grounding constraints are imposed, performance drops sharply: no model exceeds 1% accuracy when both a correct answer and accurate spatio-temporal localization are required (Level-5), and most fail to produce any correct grounded predictions. These results expose a significant gap between surface-level answer correctness and genuine evidence-based reasoning, revealing that grounded video understanding remains a bottleneck for long-video QA. We further analyze performance across minimal evidence spans, atomic abilities, and inference paradigms, providing insights for future research in grounded video reasoning. The benchmark and code will be made publicly available.
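To make the protocol concrete, the sketch below shows one way the progressively tightening checks could compose. Only Level-3 (answer-only QA) and Level-5 (answer plus spatio-temporal localization) are characterized above; the Level-4 criterion, the 0.5 IoU thresholds, and the single-box spatial check are illustrative assumptions rather than the paper's exact metric.

```python
def temporal_iou(pred, gt):
    """IoU of two (start, end) intervals in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def spatial_iou(pred, gt):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    ih = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = iw * ih
    area = lambda b: max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    union = area(pred) + area(gt) - inter
    return inter / union if union > 0 else 0.0

def passes_level(pred, gt, level, t_thr=0.5, s_thr=0.5):
    """Score a prediction under progressively tighter evidence requirements.

    Level-3: answer correctness only (standard end-to-end QA).
    Level-4: answer plus temporal grounding (assumed criterion).
    Level-5: answer plus temporal and spatial grounding.
    The 0.5 IoU thresholds and the Level-4 criterion are illustrative
    assumptions; the abstract does not specify them.
    """
    ok = pred["answer"] == gt["answer"]
    if level >= 4:
        ok = ok and temporal_iou(pred["interval"], gt["interval"]) >= t_thr
    if level >= 5:
        ok = ok and spatial_iou(pred["box"], gt["box"]) >= s_thr
    return ok
```

In practice, spatial grounding over an interval is often scored with a volume IoU averaged across annotated frames; the single-box check here is a simplification of that idea.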