LLM safety crumbles in low-resource languages because alignment is skin-deep; LASA fixes this by injecting safety at the semantic core, slashing attack success by 88%.
Multimodal models can "see" the image but still fail at reasoning because the visual input distracts the routing mechanism from activating the right experts.
LLMs can move beyond simple refusals to actively guide vulnerable users towards safe outcomes, achieving state-of-the-art safety and robustness against jailbreaks.