SLMs that seem safe with text inputs can completely fail when the same content is spoken, revealing a critical "speech grounding gap" in current models.
VLA models can execute tasks successfully yet still trigger unsafe outcomes, exposing a critical gap between action execution and semantic understanding.