This paper introduces a benchmark for text-based compositional multi-tasking in LLMs, where a single input requires the model to perform multiple tasks simultaneously, such as translation and summarization. To address the computational constraints of on-device applications, the authors propose a novel method called Learnable Calibration for efficient adapter merging. Experiments on the proposed benchmark demonstrate the effectiveness of Learnable Calibration in handling compositional tasks with limited resources.
On-device LLMs can now tackle complex tasks like generating translated summaries thanks to a new benchmark and calibration method tailored for compositional multi-tasking.
Adapter parameters provide a mechanism to modify the behavior of machine learning models and have gained significant popularity in the context of large language models (LLMs) and generative AI. These parameters can be merged to support multiple tasks via a process known as task merging. However, prior work on merging in LLMs, particularly in natural language processing, has been limited to scenarios where each test example addresses only a single task. In this paper, we focus on on-device settings and study the problem of text-based compositional multi-tasking, where each test example involves the simultaneous execution of multiple tasks. For instance, generating a translated summary of a long text requires solving both translation and summarization tasks concurrently. To facilitate research in this setting, we propose a benchmark comprising four practically relevant compositional tasks. We also present an efficient method (Learnable Calibration) tailored for on-device applications, where computational resources are limited, emphasizing the need for solutions that are both resource-efficient and high-performing. Our contributions lay the groundwork for advancing the capabilities of LLMs in real-world multi-tasking scenarios, expanding their applicability to complex, resource-constrained use cases.
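To make the merging setup concrete, here is a minimal sketch of the naive baseline that adapter-merging methods build on: averaging the low-rank (LoRA-style) updates of two task-specific adapters and folding the result into the base weight. All names, shapes, and the equal 0.5/0.5 merge weights are illustrative assumptions, not the paper's Learnable Calibration method, which learns how to combine adapters rather than averaging them uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight matrix of one LLM layer (shapes are illustrative).
d, r = 8, 2  # model dimension and adapter rank
W_base = rng.normal(size=(d, d))

# Each task-specific adapter contributes a low-rank update dW = B @ A.
A_sum, B_sum = rng.normal(size=(r, d)), rng.normal(size=(d, r))  # "summarization" adapter
A_tr, B_tr = rng.normal(size=(r, d)), rng.normal(size=(d, r))    # "translation" adapter

def merge_adapters(base, deltas, weights):
    """Fold a weighted average of low-rank updates into the base weight."""
    merged_delta = sum(w * dW for w, dW in zip(weights, deltas))
    return base + merged_delta

dW_sum = B_sum @ A_sum
dW_tr = B_tr @ A_tr

# Naive uniform merge: a single weight matrix meant to serve both tasks.
W_merged = merge_adapters(W_base, [dW_sum, dW_tr], [0.5, 0.5])
```

The appeal for on-device use is that the merged matrix has the same shape and inference cost as the base weight, regardless of how many adapters were combined; the open question the paper addresses is how to pick the combination so that a single compositional input (e.g., "summarize, then translate") is handled well.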