Zhejiang University
MLLMs can ace circuit-to-code generation by exploiting identifier semantics rather than reading the diagram, so anonymizing those identifiers exposes a striking lack of true visual grounding.
Smaller models gain a larger speed boost from speculative decoding on software engineering tasks, challenging the assumption that larger models benefit most from inference acceleration techniques.
Forget hand-crafted rules: AutoPPA learns circuit optimization strategies directly from contrasting code examples, outperforming both human experts and existing automated methods.