PerceptionComp, a new video benchmark, assesses complex, long-horizon, perception-centric video reasoning by requiring models to integrate multiple temporally separated visual cues and compositional constraints to answer questions. The benchmark includes 1,114 manually annotated questions across 279 videos from diverse domains, demanding skills like semantic recognition, visual correspondence, and temporal/spatial reasoning. Experiments show that even state-of-the-art MLLMs like Gemini-3-Flash struggle on PerceptionComp, achieving only 45.96% accuracy, highlighting a significant gap in current models' ability to perform complex perceptual reasoning over time.
Today's best MLLMs are stumped by PerceptionComp, a new video reasoning benchmark where answering questions requires piecing together visual evidence across time and space.
We introduce PerceptionComp, a manually annotated benchmark for complex, long-horizon, perception-centric video reasoning. PerceptionComp is designed so that no single moment in a video is sufficient: answering each question requires integrating multiple temporally separated pieces of visual evidence under compositional constraints with conjunctive and sequential logic. Questions span perceptual subtasks such as objects, attributes, relations, locations, actions, and events, and demand skills including semantic recognition, visual correspondence, temporal reasoning, and spatial reasoning. The benchmark contains 1,114 highly complex questions on 279 videos from diverse domains, including city walk tours, indoor villa tours, video games, and extreme outdoor sports, with 100% manual annotation. Human studies show that PerceptionComp requires substantial test-time thinking and repeated perception steps: participants take much longer than on prior benchmarks, and accuracy drops to near chance (18.97%, against the 20% chance level of the five-choice setting) when rewatching is disallowed. State-of-the-art MLLMs also perform substantially worse on PerceptionComp than on existing benchmarks: the best model in our evaluation, Gemini-3-Flash, reaches only 45.96% accuracy in the five-choice setting, while open-source models remain below 40%. These results suggest that perception-centric, long-horizon video reasoning remains a major bottleneck, and we hope PerceptionComp will help drive progress in perceptual reasoning.