LLMs can now reliably generate synthesizable Verilog for complex, hierarchical hardware designs, thanks to a knowledge graph that enforces interface consistency and dependency correctness.
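The blurb doesn't spell out what the knowledge graph actually checks; below is a minimal sketch of the kind of interface-consistency pass such a graph could enforce over a module hierarchy before (or after) generation. The `Module` data model and `check_interfaces` helper are assumptions for illustration, not the paper's system.

```python
# Toy illustration of a knowledge-graph-style interface check for a
# hierarchical hardware design; data model and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    ports: set[str]                                                       # declared port names
    submodules: dict[str, str] = field(default_factory=dict)              # instance -> module name
    connections: dict[str, dict[str, str]] = field(default_factory=dict)  # instance -> {port: net}

def check_interfaces(graph: dict[str, Module]) -> list[str]:
    """Flag instantiations whose port connections don't match the
    instantiated module's declared interface."""
    errors = []
    for parent in graph.values():
        for inst, child_name in parent.submodules.items():
            child = graph.get(child_name)
            if child is None:
                errors.append(f"{parent.name}.{inst}: unknown module '{child_name}'")
                continue
            wired = set(parent.connections.get(inst, {}))
            for extra in wired - child.ports:
                errors.append(f"{parent.name}.{inst}: no such port '{extra}' on {child_name}")
            for missing in child.ports - wired:
                errors.append(f"{parent.name}.{inst}: port '{missing}' of {child_name} left unconnected")
    return errors

# Example: a top module instantiating an adder with a misspelled port.
graph = {
    "adder": Module("adder", ports={"a", "b", "sum"}),
    "top": Module("top", ports={"clk"},
                  submodules={"u_add": "adder"},
                  connections={"u_add": {"a": "x", "b": "y", "sm": "z"}}),
}
for err in check_interfaces(graph):
    print(err)
```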
Rescuing previously rejected draft tokens that are semantically valid but lexically different speeds up speculative decoding by more than 2x without sacrificing accuracy.
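To make the rescue idea concrete, here is a toy speculative-decoding loop in which a draft token rejected by the target model gets a second chance if it passes a semantic-equivalence check. `draft_propose`, `target_accept_prob`, and `semantically_equivalent` are placeholder stand-ins, not the paper's actual interfaces.

```python
# Sketch of speculative decoding with a "token rescue" step; all model
# interfaces below are hypothetical toy placeholders.
import random

def draft_propose(prefix, k=4):
    """Draft model proposes k candidate tokens (toy placeholder)."""
    vocab = ["fast", "quick", "rapid", "slow", "the", "a"]
    return [random.choice(vocab) for _ in range(k)]

def target_accept_prob(prefix, token):
    """Target model's acceptance probability for a draft token (toy placeholder)."""
    return random.random()

def semantically_equivalent(token, prefix):
    """Rescue check: is the rejected token an acceptable paraphrase in context?
    In practice this would compare against the target model's preferred
    continuation (e.g. via embeddings); here it is a toy synonym table."""
    preferred = "fast"  # stand-in for the target model's top token
    synonyms = {"fast": {"quick", "rapid"}}
    return token == preferred or token in synonyms.get(preferred, set())

def decode_step(prefix, threshold=0.5):
    accepted = []
    for token in draft_propose(prefix):
        if target_accept_prob(prefix, token) >= threshold:
            accepted.append(token)   # standard speculative acceptance
        elif semantically_equivalent(token, prefix):
            accepted.append(token)   # rescued: lexically different, semantically valid
        else:
            break                    # hard rejection ends the speculative run
        prefix = prefix + [token]
    return accepted

print(decode_step(["the"]))
```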
Current LLM watermarks are surprisingly easy to spoof: a small 4B model, trained with reinforcement learning on just 100 examples, can fool them more than 60% of the time.
Forget tedious fine-tuning: leveraging molecule identifiers as visual prompts unlocks surprisingly powerful zero-shot chemical reaction diagram parsing in VLMs.
By grounding reflection in the visual artifacts of presentation slides, DeepPresenter enables agents to iteratively refine presentations in a way that internal reasoning traces alone cannot.