Institute of Automation, Chinese Academy of Sciences
A "Think While Watching" framework enables multimodal large language models (MLLMs) to reason about streaming video with significantly improved long-range memory and shorter output token sequences.