The paper introduces CIMR, a framework for improving LVLM performance on complex, multi-step multi-modal instruction following by incorporating context-aware iterative reasoning and self-correction. CIMR employs a two-stage process of initial reasoning/response generation followed by iterative refinement using parsed multi-modal feedback, dynamically fusing textual, visual, and contextual features at each step. Fine-tuning LLaVA-1.5-7B with CIMR achieves 91.5% accuracy on the Multi-modal Action Planning (MAP) dataset, surpassing GPT-4V and other state-of-the-art models.
LVLMs can now iteratively self-correct and reason about multi-modal instructions, achieving SOTA performance by dynamically fusing textual, visual, and contextual features.
The rapid advancement of Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) has enhanced our ability to process and generate human language and visual information. However, these models often struggle with complex, multi-step multi-modal instructions that require logical reasoning, dynamic feedback integration, and iterative self-correction. To address this, we propose CIMR: Contextualized Iterative Multimodal Reasoning, a novel framework that introduces a context-aware iterative reasoning and self-correction module. CIMR operates in two stages: initial reasoning and response generation, followed by iterative refinement using parsed multi-modal feedback. A dynamic fusion module deeply integrates textual, visual, and contextual features at each step. We fine-tune LLaVA-1.5-7B on the Visual Instruction Tuning (VIT) dataset and evaluate CIMR on the newly introduced Multi-modal Action Planning (MAP) dataset. CIMR achieves 91.5% accuracy, outperforming state-of-the-art models such as GPT-4V (89.2%), LLaVA-1.5 (78.5%), MiniGPT-4 (75.3%), and InstructBLIP (72.8%), demonstrating the efficacy of its iterative reasoning and self-correction capabilities in complex tasks.
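The two-stage procedure the abstract describes can be sketched as a loop: generate an initial response from fused features, parse feedback against the result, fold that feedback into the contextual features, re-fuse, and regenerate until the self-correction signal vanishes. The paper does not publish an implementation, so everything below (the `ToyLVLM` stand-in, the weighted-sum fusion, the feedback update rule) is an illustrative assumption, not CIMR's actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Features:
    text: list       # textual features
    visual: list     # visual features
    context: list    # contextual features, updated each refinement step

def dynamic_fusion(f: Features, weights=(0.4, 0.4, 0.2)) -> list:
    """Toy stand-in for the dynamic fusion module: a per-element
    weighted combination of the three feature streams (assumed form)."""
    wt, wv, wc = weights
    return [wt * t + wv * v + wc * c
            for t, v, c in zip(f.text, f.visual, f.context)]

class ToyLVLM:
    """Stand-in model: a 'response' is the sum of fused features, and
    parsed feedback is the gap to a known target answer."""
    def __init__(self, target: float):
        self.target = target

    def generate(self, fused: list) -> float:
        return sum(fused)

    def parse_feedback(self, response: float):
        err = self.target - response
        return err if abs(err) > 1e-6 else None  # None => converged

def cimr_loop(model, text_f, vis_f, max_iters=5):
    n = len(text_f)
    ctx = [0.0] * n
    # Stage 1: initial reasoning / response generation
    response = model.generate(dynamic_fusion(Features(text_f, vis_f, ctx)))
    # Stage 2: iterative refinement from parsed feedback
    for _ in range(max_iters):
        fb = model.parse_feedback(response)
        if fb is None:              # self-correction has converged
            break
        # fold feedback into the context stream (toy update: spread the
        # correction across context elements, scaled by the fusion weight)
        ctx = [c + fb / (0.2 * n) for c in ctx]
        response = model.generate(dynamic_fusion(Features(text_f, vis_f, ctx)))
    return response
```

In this toy setting a single refinement step closes the gap exactly; in the actual framework, the refinement would instead re-run the LVLM conditioned on the newly fused features.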