Parallel In-Context Learning (Parallel-ICL) is introduced to mitigate the inference latency of multi-modal in-context learning (MM-ICL) in large vision-language models (LVLMs). Parallel-ICL partitions the demonstration context into shorter chunks processed in parallel, aggregating predictions via a weighted Product-of-Experts (PoE) ensemble. The method incorporates clustering-based context chunking for diversity and similarity-based context compilation for relevance weighting, achieving comparable performance to full-context MM-ICL with improved speed.
Overcome the quadratic attention bottleneck in vision-language models with Parallel-ICL, a method that achieves comparable performance to full-context learning while drastically reducing inference time.
Large vision-language models (LVLMs) employ multi-modal in-context learning (MM-ICL) to adapt to new tasks by leveraging demonstration examples. While increasing the number of demonstrations boosts performance, it incurs significant inference latency due to the quadratic computational cost of Transformer attention with respect to context length. To address this trade-off, we propose Parallel In-Context Learning (Parallel-ICL), a plug-and-play inference algorithm. Parallel-ICL partitions the long demonstration context into multiple shorter, manageable chunks. It processes these chunks in parallel and integrates their predictions at the logit level, using a weighted Product-of-Experts (PoE) ensemble to approximate the full-context output. Guided by ensemble learning theory, we introduce two principled strategies for Parallel-ICL: (i) clustering-based context chunking to maximize inter-chunk diversity and (ii) similarity-based context compilation to weight predictions by query relevance. Extensive experiments on VQA, image captioning, and classification benchmarks demonstrate that Parallel-ICL achieves performance comparable to full-context MM-ICL while significantly improving inference speed. Our work offers an effective solution to the accuracy-efficiency trade-off in MM-ICL, enabling dynamic task adaptation with substantially reduced inference overhead.
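The logit-level aggregation step can be illustrated with a minimal sketch. A weighted Product-of-Experts over per-chunk output distributions corresponds to a weighted sum of each chunk's log-probabilities, renormalized into a single distribution. The function below is an assumption-laden illustration of that operation (the paper's exact weighting and normalization details may differ); `chunk_logits` stands in for the next-token logits produced by running the model on each context chunk in parallel, and `weights` for the similarity-based relevance weights.

```python
import math

def poe_ensemble(chunk_logits, weights):
    """Hypothetical sketch of weighted Product-of-Experts logit fusion.

    chunk_logits: one logit vector per parallel context chunk
    weights: one relevance weight per chunk (e.g. query similarity)
    Returns a single probability distribution over the vocabulary.
    """
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize relevance weights

    vocab = len(chunk_logits[0])
    combined = [0.0] * vocab
    for logits, w in zip(chunk_logits, weights):
        # log-softmax: convert this chunk's logits to log-probabilities
        m = max(logits)
        lse = m + math.log(sum(math.exp(x - m) for x in logits))
        for i, x in enumerate(logits):
            combined[i] += w * (x - lse)  # weighted product of experts in log space

    # renormalize the weighted sum of log-probs into a distribution
    m = max(combined)
    z = sum(math.exp(c - m) for c in combined)
    return [math.exp(c - m) / z for c in combined]
```

With equal weights and identical chunk logits, the ensemble reduces to the ordinary softmax of a single chunk, which is a useful sanity check when implementing this kind of fusion.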