State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China
LLMs can verify code more effectively by focusing on test case utility rather than sheer quantity, achieving a 28.5% higher mutation score with 19.3% fewer tests.
By mimicking human visual attention, TraceVision significantly boosts spatial reasoning in vision-language models, outperforming existing methods on trajectory-guided tasks.
Standard multimodal LLMs can perform surprisingly well on dense prediction tasks like segmentation and depth estimation, without needing any task-specific decoder modules.
Forget full attention: MiniCPM-SALA, a hybrid sparse-linear attention model, achieves 3.5x faster inference and supports 1M-token context on a single GPU, while maintaining performance comparable to full attention.