The paper introduces MM-SafetyBench++, a new benchmark for evaluating contextual safety in MLLMs by creating paired safe/unsafe image-text examples with subtle intent differences. The authors then propose EchoSafe, a training-free inference-time framework that uses a self-reflective memory bank to incorporate safety insights from past interactions into current prompts. Experiments show EchoSafe improves contextual safety on various multi-modal benchmarks, demonstrating the effectiveness of memory-augmented inference for evolving safety behaviors.
MLLMs can learn to be safer at inference time, without any additional training, by remembering and reasoning about past safety failures.
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability to safety risks remains a pressing concern. While prior research primarily focuses on jailbreak defenses that detect and refuse explicitly unsafe inputs, such approaches often overlook contextual safety, which requires models to distinguish subtle contextual differences between scenarios that may appear similar but diverge significantly in safety intent. In this work, we present MM-SafetyBench++, a carefully curated benchmark designed for contextual safety evaluation. Specifically, for each unsafe image-text pair, we construct a corresponding safe counterpart through minimal modifications that flip the user intent while preserving the underlying contextual meaning, enabling controlled evaluation of whether models can adapt their safety behaviors based on contextual understanding. Further, we introduce EchoSafe, a training-free framework that maintains a self-reflective memory bank to accumulate and retrieve safety insights from prior interactions. By integrating relevant past experiences into current prompts, EchoSafe enables context-aware reasoning and continual evolution of safety behavior during inference. Extensive experiments on various multi-modal safety benchmarks demonstrate that EchoSafe consistently outperforms existing defenses, establishing a strong baseline for advancing contextual safety in MLLMs. All benchmark data and code are available at https://echosafe-mllm.github.io.
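The memory-augmented inference loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the word-overlap relevance score, and the prompt format are all assumptions; the actual EchoSafe retrieval mechanism (e.g., embedding-based similarity over image-text pairs) may differ.

```python
# Hypothetical sketch of an EchoSafe-style self-reflective memory bank.
# All names and the similarity measure are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class SafetyMemoryBank:
    """Accumulates safety insights from prior interactions."""
    insights: list = field(default_factory=list)

    def add(self, context: str, insight: str) -> None:
        # Store the interaction context alongside the lesson drawn from it.
        self.insights.append((context, insight))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Toy relevance score: word overlap between the query and each
        # stored context; a real system would likely use embeddings.
        q = set(query.lower().split())
        scored = sorted(
            self.insights,
            key=lambda pair: len(q & set(pair[0].lower().split())),
            reverse=True,
        )
        return [insight for _, insight in scored[:k]]


def build_prompt(bank: SafetyMemoryBank, user_query: str) -> str:
    # Prepend retrieved insights so the MLLM can reason over past safety
    # experience at inference time, with no parameter updates.
    lessons = bank.retrieve(user_query)
    header = "\n".join(f"- {lesson}" for lesson in lessons)
    return f"Past safety insights:\n{header}\n\nUser request: {user_query}"


bank = SafetyMemoryBank()
bank.add("how to pick a lock on my own front door",
         "Lock-picking requests are safe when the user owns the lock.")
bank.add("how to pick a lock on a neighbor's door",
         "Refuse lock-picking aimed at property the user does not own.")
print(build_prompt(bank, "picking a lock on someone else's door"))
```

The key design point this illustrates is that safety behavior evolves purely through the growing memory bank and prompt construction, keeping the underlying model frozen, which is what makes the approach training-free.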