VideoLLMs can now think 15x faster while watching, thanks to a novel streaming paradigm that interleaves perception and reasoning.
By jointly training a keyframe sampler with an MLLM, MSJoE achieves state-of-the-art accuracy in long-form video understanding while significantly reducing computational cost.
Unleashing powerful reasoning in OLLMs doesn't require expensive training data or compute, just clever guidance from existing Large Reasoning Models.