Shanghai Jiao Tong University
LLMs can now scale depth more effectively: a new attention mechanism recovers diluted features in deeper layers, boosting performance with negligible overhead.
Multi-robot coverage can now handle multiple sensory demands simultaneously, with provable guarantees on performance even when those demands are initially unknown.
Ditch the slow, iterative zooming during MLLM inference: Region-to-Image Distillation lets you bake those agentic zooming benefits directly into a single forward pass.
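The last blurb names Region-to-Image Distillation but doesn't show its objective. The general pattern it alludes to, training a single-pass student to match a teacher that benefited from agentic zooming, can be sketched with the standard temperature-softened distillation loss. Everything below (function names, logits, temperature) is illustrative, not the paper's actual method.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the classic knowledge-distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical teacher: answer logits after zooming into the relevant region.
teacher = [4.0, 1.0, 0.5]
# Hypothetical student: one full-image forward pass, trained toward the teacher.
student = [3.5, 1.2, 0.6]
print(round(distillation_loss(teacher, student), 4))
```

The loss is zero when the student exactly matches the teacher and positive otherwise, so minimizing it over training data pushes the single forward pass toward the zoom-augmented behavior.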