This paper introduces an agentic framework for autonomous multimodal query processing that dynamically orchestrates specialized tools across various modalities. The framework uses a central "Supervisor" to decompose queries, delegate subtasks to modality-appropriate tools using learned routing (RouteLLM for text, SLM-assisted decomposition for non-text), and synthesize results. Experiments across 15 task categories demonstrate a 72% reduction in time-to-answer, 85% reduction in rework, and 67% cost reduction compared to a hierarchical baseline, while maintaining accuracy.
Forget rigid decision trees: a dynamically orchestrated agent slashes multimodal query processing costs by 67% while boosting speed and reducing rework.
We present an agentic AI framework for autonomous multimodal query processing that coordinates specialized tools across text, image, audio, video, and document modalities. A central Supervisor dynamically decomposes user queries, delegates subtasks to modality-appropriate tools (e.g., object detection, OCR, speech transcription), and synthesizes results through adaptive routing strategies rather than predetermined decision trees. For text-only queries, the framework uses learned routing via RouteLLM, while non-text paths use SLM-assisted modality decomposition. Evaluated on 2,847 queries across 15 task categories, our framework achieves a 72% reduction in time-to-accurate-answer, an 85% reduction in conversational rework, and a 67% cost reduction compared to a matched hierarchical baseline, while maintaining accuracy parity. These results suggest that intelligent centralized orchestration can substantially improve the deployment economics of multimodal AI.
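The control flow the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration: the tool names, the toy difficulty heuristic standing in for RouteLLM's learned router, and the keyword-style decomposition standing in for SLM-assisted decomposition are all assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    modality: str   # "image", "audio", "document", "video"
    tool: str       # e.g. "ocr", "object_detection", "speech_transcription"
    payload: str

# Stand-in for RouteLLM-style learned routing on the text-only path:
# pick a small or large model based on a (toy) difficulty score.
def route_text(query: str, threshold: float = 0.5) -> str:
    difficulty = min(len(query) / 200.0, 1.0)  # toy proxy, not a learned score
    return "large_llm" if difficulty > threshold else "small_llm"

# Stand-in for SLM-assisted modality decomposition: map each detected
# modality to a modality-appropriate tool.
def decompose(query: str, modalities: set[str]) -> list[Subtask]:
    tool_map = {
        "image": "object_detection",
        "document": "ocr",
        "audio": "speech_transcription",
        "video": "video_analysis",
    }
    return [Subtask(m, tool_map[m], query)
            for m in sorted(modalities) if m in tool_map]

class Supervisor:
    """Central orchestrator: routes text-only queries via a learned router,
    decomposes multimodal queries into tool-level subtasks."""

    def handle(self, query: str, modalities: set[str]) -> dict:
        if modalities == {"text"}:
            # Text-only path: learned routing instead of a fixed decision tree.
            return {"path": "text", "model": route_text(query), "subtasks": []}
        subtasks = decompose(query, modalities)
        # In the full framework each subtask would be dispatched to its tool
        # and the results synthesized; here we just report the plan.
        return {"path": "multimodal", "model": None,
                "subtasks": [(t.modality, t.tool) for t in subtasks]}
```

For example, a query over an image containing text would be planned as an OCR subtask plus an object-detection subtask, whose results the Supervisor would then synthesize into a single answer.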