Forget expensive training: FlexMem reaches state-of-the-art long-video MLLM performance on a single GPU by mimicking human memory recall.
Optimizing committee configurations with mixed integer programming can boost transaction throughput in trusted parallel BFT systems by up to 21%, outperforming randomized assignment.
Get state-of-the-art spoken QA performance by adding lightweight speech modules to frozen VL models and training on synthetically generated speech data, sidestepping the need for massive multimodal datasets.