This paper investigates the vulnerability of LLMs to safety attacks in low-resource languages, attributing it to a mismatch between language-agnostic semantic understanding and safety alignment that is biased toward high-resource languages. The authors identify a "semantic bottleneck" layer, where representations are primarily semantic rather than language-specific, and propose Language-Agnostic Semantic Alignment (LASA) to anchor safety alignment in this bottleneck. LASA substantially reduces attack success rates across a range of models and languages, demonstrating the importance of language-agnostic semantic understanding for robust safety.
LLM safety crumbles in low-resource languages because alignment is skin-deep; LASA fixes this by injecting safety at the semantic core, slashing attack success by 88%.
Large language models (LLMs) often demonstrate strong safety performance in high-resource languages, yet exhibit severe vulnerabilities when queried in low-resource languages. We attribute this gap to a mismatch between language-agnostic semantic understanding and safety alignment biased toward high-resource languages. Consistent with this hypothesis, we empirically identify the semantic bottleneck in LLMs, an intermediate layer in which the geometry of model representations is governed primarily by shared semantic content rather than language identity. Building on this observation, we propose Language-Agnostic Semantic Alignment (LASA), which anchors safety alignment directly in this semantic bottleneck. Experiments show that LASA substantially improves safety across all languages: average attack success rate (ASR) drops from 24.7% to 2.8% on LLaMA-3.1-8B-Instruct and remains around 3-4% across Qwen2.5 and Qwen3 Instruct models (7B-32B). Together, our analysis and method offer a representation-level perspective on LLM safety, suggesting that safety alignment requires anchoring safety understanding not in surface text, but in the model's language-agnostic semantic space.