The paper introduces Test-Time Rethinking for In-Context Reinforcement Learning (TR-ICRL), a framework that improves LLM performance on reasoning and knowledge-intensive tasks by iteratively refining answers using pseudo-labels derived from retrieved, unlabeled evaluation instances. TR-ICRL generates candidate answers for retrieved instances, derives pseudo-labels via majority voting, and uses these labels to provide reward messages and feedback to guide iterative refinement. Experiments show that TR-ICRL significantly boosts performance, improving Qwen2.5-7B by 21.23% on MedQA and 137.59% on AIME2024.
LLMs can achieve substantial performance gains on reasoning and knowledge-intensive tasks by iteratively refining their answers against pseudo-labels, obtained via majority voting over unlabeled data, that serve as reward proxies.
In-Context Reinforcement Learning (ICRL) enables Large Language Models (LLMs) to learn online from external rewards directly within the context window. However, a central challenge in ICRL is reward estimation, as models typically lack access to ground-truth labels during inference. To address this limitation, we propose Test-Time Rethinking for In-Context Reinforcement Learning (TR-ICRL), a novel ICRL framework designed for both reasoning and knowledge-intensive tasks. TR-ICRL operates by first retrieving the most relevant instances from an unlabeled evaluation set for a given query. During each ICRL iteration, the LLM generates a set of candidate answers for every retrieved instance, and a pseudo-label is derived from this set through majority voting. This label then serves as a proxy to provide reward messages and generate formative feedback, guiding the LLM through iterative refinement. Finally, this synthesized contextual information is integrated with the original query to form a comprehensive prompt, with the final answer determined through a final round of majority voting. TR-ICRL is evaluated on mainstream reasoning and knowledge-intensive tasks, where it demonstrates significant performance gains. Remarkably, TR-ICRL improves Qwen2.5-7B by 21.23% on average on MedQA and by as much as 137.59% on AIME2024. Extensive ablation studies and analyses further validate the effectiveness and robustness of our approach. Our code is available at https://github.com/pangpang-xuan/TR_ICRL.
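The loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation (see the linked repository for that): `retrieve`, `generate`, and `refine` are hypothetical callables standing in for retrieval over the unlabeled evaluation set and the LLM calls, and the reward here is simply agreement with the majority-vote pseudo-label.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among candidates (the pseudo-label)."""
    return Counter(answers).most_common(1)[0][0]

def tr_icrl(query, retrieve, generate, refine, n_candidates=5, n_iters=3):
    """Hedged sketch of a TR-ICRL-style loop.

    Assumptions (not from the paper's code):
      retrieve(query)                  -> list of relevant unlabeled instances
      generate(prompt, context)        -> one candidate answer (LLM call)
      refine(inst, cands, label, rs)   -> formative feedback string (LLM call)
    """
    instances = retrieve(query)          # most relevant unlabeled instances
    context = []                         # synthesized contextual information
    for _ in range(n_iters):
        for inst in instances:
            candidates = [generate(inst, context) for _ in range(n_candidates)]
            pseudo_label = majority_vote(candidates)  # proxy for ground truth
            # Reward message: 1 if a candidate agrees with the pseudo-label.
            rewards = [int(c == pseudo_label) for c in candidates]
            context.append(refine(inst, candidates, pseudo_label, rewards))
    # Final answer: majority vote over answers conditioned on the full context.
    finals = [generate(query, context) for _ in range(n_candidates)]
    return majority_vote(finals)
```

The two majority votes play different roles: the inner vote manufactures a reward signal where no label exists, while the outer vote aggregates the refined answers for the query itself.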