KidsNanny, a two-stage multimodal content moderation pipeline for child safety, combines a ViT-based visual screener with OCR and a 7B language model for contextual reasoning. The pipeline routes object detection outputs to the language model as text rather than raw pixels, enabling reasoning about text embedded within images. Evaluated on the UnsafeBench Sexual category, KidsNanny achieves 81.40% accuracy and 86.16% F1 at 120 ms total latency, outperforming baselines such as ShieldGemma-2 and LlavaGuard in both accuracy and latency, particularly on text-dependent threats.
A multimodal pipeline integrating vision, OCR, and LLMs can match or exceed existing content moderation methods at significantly lower latency, especially for threats embedded in image text.
We present KidsNanny, a two-stage multimodal content moderation architecture for child safety. Stage 1 combines a vision transformer (ViT) with an object detector for visual screening (11.7 ms); its outputs are routed as text, not raw pixels, to Stage 2, which applies OCR and a text-based 7B language model for contextual reasoning (120 ms total pipeline). We evaluate on the UnsafeBench Sexual category (1,054 images) under two regimes: vision-only, isolating Stage 1, and multimodal, evaluating the full Stage 1+2 pipeline. Stage 1 achieves 80.27% accuracy and 85.39% F1 at 11.7 ms; vision-only baselines range from 59.01% to 77.04% accuracy. The full pipeline achieves 81.40% accuracy and 86.16% F1 at 120 ms, compared to ShieldGemma-2 (64.80% accuracy, 1,136 ms) and LlavaGuard (80.36% accuracy, 4,138 ms). To evaluate text-awareness, we extract two subsets: a text+visual subset (257 images) and a text-only subset (44 images whose safety depends primarily on embedded text). On text-only images, KidsNanny achieves 100% recall (25/25 positives; small sample) and 75.76% precision; ShieldGemma-2 achieves 84% recall and 60% precision at 1,136 ms. These results suggest that dedicated OCR-based reasoning may offer recall-precision advantages on text-embedded threats at lower latency, though the small text-only subset limits generalizability. By documenting this architecture and evaluation methodology, we aim to contribute to the broader research effort on efficient multimodal content moderation for child safety.
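The two-stage routing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: every function below (`screen_visual`, `detect_objects`, `run_ocr`, `llm_classify`) is a hypothetical stub standing in for the real ViT screener, object detector, OCR engine, and 7B language model.

```python
# Minimal sketch of KidsNanny's two-stage routing. All component
# functions are hypothetical stubs, not the paper's actual APIs.

def screen_visual(image):
    # Stub for the Stage 1 ViT screener: returns an unsafe-probability.
    return 0.90

def detect_objects(image):
    # Stub for the Stage 1 object detector: returns object labels.
    return ["sign", "person"]

def run_ocr(image):
    # Stub for the OCR engine: returns text embedded in the image.
    return "explicit slogan"

def llm_classify(prompt):
    # Stub for the text-based 7B LLM; a trivial keyword rule stands in.
    return "UNSAFE" if "explicit" in prompt else "SAFE"

def moderate(image):
    # Stage 1: fast visual screening (ViT + object detector).
    visual_score = screen_visual(image)
    objects = detect_objects(image)
    # Stage 2: route detections and OCR text -- not raw pixels --
    # to the language model for contextual reasoning.
    prompt = (
        f"Visual risk score: {visual_score:.2f}\n"
        f"Detected objects: {', '.join(objects)}\n"
        f"Embedded text (OCR): {run_ocr(image)}\n"
        "Classify for child safety: SAFE or UNSAFE."
    )
    return llm_classify(prompt)

print(moderate("image.png"))  # the stubs above route to "UNSAFE"
```

Serializing detector and OCR outputs into a single text prompt is what lets a text-only LLM reason about threats carried by embedded text, which pixel-only classifiers tend to miss.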