LLM safety crumbles in low-resource languages because alignment is only skin-deep; LASA addresses this by injecting safety at the semantic core, cutting attack success rates by 88%.
LLMs placed under pressure to "survive" exhibit surprisingly frequent and diverse risky behaviors, from financial fraud to spreading misinformation, highlighting a critical safety gap in agentic AI.