The paper introduces IMPACT-CYCLE, a multi-agent system for claim-level supervisory correction of long-video semantic memory, addressing the challenge of costly error correction in existing multimodal pipelines. It decomposes verification into object-relation correctness, cross-temporal consistency, and global semantic coherence, using role-specialized agents and human arbitration. Experiments on VidOR demonstrate improved VQA performance (0.71 to 0.79) and a 4.8x reduction in human arbitration cost compared to manual annotation.
Correcting errors in long-video understanding doesn't have to be a nightmare: IMPACT-CYCLE slashes human arbitration costs by 4.8x while boosting VQA accuracy by intelligently decomposing the task and focusing human effort where it matters most.
Correcting errors in long-video understanding is disproportionately costly: existing multimodal pipelines produce opaque, end-to-end outputs that expose no intermediate state for inspection, forcing annotators to revisit raw video and reconstruct temporal logic from scratch. The core bottleneck is not generation quality alone, but the absence of a supervisory interface through which human effort can be made proportional to the scope of each error. We present IMPACT-CYCLE, a supervisory multi-agent system that reformulates long-video understanding as iterative claim-level maintenance of a shared semantic memory -- a structured, versioned state encoding typed claims, a claim dependency graph, and a provenance log. Role-specialized agents operating under explicit authority contracts decompose verification into local object-relation correctness, cross-temporal consistency, and global semantic coherence, with corrections confined to structurally dependent claims. When automated evidence is insufficient, the system escalates to human arbitration as the supervisory authority with final override rights; dependency-closure re-verification then keeps correction cost proportional to error scope. Experiments on VidOR show substantially improved downstream reasoning (VQA: 0.71 to 0.79) and a 4.8x reduction in human arbitration cost relative to manual annotation. Code will be released at https://github.com/MKong17/IMPACT_CYCLE.
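The abstract's central mechanism -- a versioned semantic memory whose corrections invalidate only the dependency closure of the edited claim -- can be sketched as follows. This is a minimal illustrative sketch, not the released code: all names (`SemanticMemory`, `Claim`, `dependency_closure`, the claim-type strings) are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """A typed claim in the semantic memory (types assumed for illustration)."""
    claim_id: str
    claim_type: str  # e.g. "object-relation", "temporal", "global"
    text: str
    verified: bool = False


class SemanticMemory:
    """Versioned claim store with a dependency graph and a provenance log."""

    def __init__(self):
        self.claims = {}        # claim_id -> Claim
        self.dependents = {}    # claim_id -> set of claim_ids depending on it
        self.provenance = []    # append-only log of (version, action, claim_id)
        self.version = 0

    def add_claim(self, claim, depends_on=()):
        self.claims[claim.claim_id] = claim
        for parent in depends_on:
            self.dependents.setdefault(parent, set()).add(claim.claim_id)
        self._log("add", claim.claim_id)

    def correct_claim(self, claim_id, new_text):
        """Apply a correction, then invalidate only structurally dependent claims."""
        claim = self.claims[claim_id]
        claim.text = new_text
        claim.verified = False
        self._log("correct", claim_id)
        # Dependency-closure re-verification: cost scales with error scope,
        # since only transitive dependents are flagged for re-checking.
        for dep_id in self.dependency_closure(claim_id):
            self.claims[dep_id].verified = False
            self._log("invalidate", dep_id)

    def dependency_closure(self, claim_id):
        """All claims transitively dependent on claim_id (excluding itself)."""
        seen, stack = set(), [claim_id]
        while stack:
            for dep in self.dependents.get(stack.pop(), ()):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    def _log(self, action, claim_id):
        self.version += 1
        self.provenance.append((self.version, action, claim_id))
```

Correcting a local object-relation claim then touches only its downstream temporal and global claims, which is the property the paper relies on to keep human arbitration cost proportional to error scope.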