This paper introduces Dual-Aware Adaptive Transmission (DAT), a system for efficient multimodal LLM inference on video streams in edge-cloud environments. DAT employs a lightweight edge-side model to filter non-target frames and trigger MLLM inference only when necessary, coupled with a fine-tuning strategy to improve event understanding and output consistency. The system further optimizes multi-stream transmission based on semantics and bandwidth, achieving high accuracy, consistency, and low latency in event alerting and visual evidence delivery.
DAT achieves up to a 77.5% reduction in semantic alert delay and delivers 98.33% of visual evidence within 0.5 s by cascading small and large models and adaptively transmitting data in edge-cloud MLLM systems.
Multimodal large language models (MLLMs) have shown strong capability in semantic understanding and visual reasoning, yet applying them to continuous video streams in bandwidth-constrained edge-cloud systems incurs prohibitive computation and communication overhead, hindering low-latency alerting and effective visual evidence delivery. To address this challenge, we propose Dual-Aware Adaptive Transmission (DAT), which achieves high-quality semantic generation, low-latency event alerting, and effective visual evidence supplementation. To reduce unnecessary deep reasoning costs, we propose a collaborative small-large model cascade: a lightweight edge-side small model acts as a gating module that filters non-target-event frames and performs object detection, triggering MLLM inference only for suspicious frames. Building on this, we introduce an efficient fine-tuning strategy with visual guidance and semantic prompting, which improves structured event understanding, object detection, and output consistency. To ensure low-latency semantic alerting and effective visual evidence supplementation under bandwidth constraints, we further devise a semantics- and bandwidth-aware multi-stream adaptive transmission optimization method. Experimental results show that DAT achieves 98.83% recognition accuracy and 100% output consistency. Under severe congestion, it reduces weighted semantic alert delay by up to 77.5% and delivers 98.33% of visual evidence within 0.5 s, demonstrating the effectiveness of jointly optimizing cascade inference and elastic transmission.
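The small-large model cascade described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `edge_detector` and `cloud_mllm` are hypothetical stand-ins for the edge-side gating model and the cloud-side MLLM, and the confidence threshold is an assumed parameter.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def edge_detector(frame) -> Detection:
    # Placeholder lightweight gate: here a frame is just a list of
    # pixel intensities, and high mean intensity stands in for a
    # "suspicious" detection. A real system would run a small
    # detection model on the edge device.
    score = sum(frame) / len(frame)
    return Detection("suspicious" if score > 0.5 else "background", score)

def cloud_mllm(frame) -> str:
    # Placeholder for the expensive cloud-side MLLM call.
    return "structured event description"

def cascade_infer(frames, threshold=0.5):
    """Run the edge gate on every frame; invoke the MLLM only for
    frames the gate flags as suspicious above the threshold."""
    alerts = []
    for frame in frames:
        det = edge_detector(frame)
        if det.label == "suspicious" and det.confidence >= threshold:
            alerts.append(cloud_mllm(frame))  # deep reasoning only here
    return alerts
```

The cost saving comes from the gate: non-target frames never reach the MLLM, so deep reasoning is paid only on the small fraction of frames that plausibly contain an event.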