Search papers, labs, and topics across Lattice.
The paper introduces Uni-SafeBench, a new benchmark to evaluate the holistic safety of Unified Multimodal Large Models (UMLMs) across understanding and generation tasks. They find that unifying architectures degrades the inherent safety of the underlying LLM, and open-source UMLMs are significantly less safe than specialized models. The benchmark includes six safety categories across seven task types, along with Uni-Judger, a framework to decouple contextual and intrinsic safety.
Unifying multimodal AI architectures doesn't just boost performance; it also dramatically degrades safety, especially in open-source models.
Unified Multimodal Large Models (UMLMs) integrate understanding and generation capabilities within a single architecture. While this architectural unification, driven by the deep fusion of multimodal features, enhances model performance, it also introduces important yet underexplored safety challenges. Existing safety benchmarks predominantly focus on isolated understanding or generation tasks, failing to evaluate the holistic safety of UMLMs when handling diverse tasks under a unified framework. To address this, we introduce Uni-SafeBench, a comprehensive benchmark featuring a taxonomy of six major safety categories across seven task types. To ensure rigorous assessment, we develop Uni-Judger, a framework that effectively decouples contextual safety from intrinsic safety. Based on comprehensive evaluations across Uni-SafeBench, we uncover that while the unification process enhances model capabilities, it significantly degrades the inherent safety of the underlying LLM. Furthermore, open-source UMLMs exhibit much lower safety performance than multimodal large models specialized for either generation or understanding tasks. We open-source all resources to systematically expose these risks and foster safer AGI development.