Institute of Computing Technology, Chinese Academy of Sciences
LLMs' code fixes often break what wasn't broken, but a new training scheme that rewards minimal edits can boost repair precision by 31%.
By intelligently cascading small and large models and adaptively transmitting data, edge-cloud MLLM systems can cut semantic alert delay by up to 77.5% and deliver 98.33% of visual evidence within 0.5 s.