SLMs that seem safe with text inputs can completely fail when the same content is spoken, revealing a critical "speech grounding gap" in current models.
Reward hacking isn't just a bug; it's a feature arising from the fundamental mismatch between complex human goals and the compressed reward signals used to train LLMs.
Offloading memory and computation to a copilot lets a 7B-parameter GUI agent outperform larger models on long-horizon tasks, suggesting a path to more efficient and capable GUI automation.