This survey paper provides a technical overview of reinforcement learning (RL) techniques used to align and enhance large language models (LLMs), covering algorithms such as PPO, Q-Learning, and Actor-Critic methods. It analyzes the application of RLHF, RLAIF, DPO, and GRPO across domains ranging from code generation to tool-augmented reasoning, and presents a comparative taxonomy based on reward modeling, feedback mechanisms, and optimization strategies. The survey highlights the dominance of RLHF for alignment and the effectiveness of outcome-based RL for stepwise reasoning, discusses challenges such as reward hacking and computational cost, and outlines emerging directions including hybrid RL and verifier-guided training.
Despite the dominance of RLHF for LLM alignment, outcome-based RL methods are proving surprisingly effective at improving stepwise reasoning.
Reinforcement Learning (RL) has emerged as a transformative approach for aligning and enhancing Large Language Models (LLMs), addressing critical challenges in instruction following, ethical alignment, and reasoning capabilities. This survey offers a comprehensive foundation for the integration of RL with language models, highlighting prominent algorithms such as Proximal Policy Optimization (PPO), Q-Learning, and Actor-Critic methods. Additionally, it provides an extensive technical overview of RL techniques specifically tailored for LLMs, including foundational methods like Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF), as well as advanced strategies such as Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO). We systematically analyze their applications across domains, e.g., from code generation to tool-augmented reasoning. We also present a comparative taxonomy based on reward modeling, feedback mechanisms, and optimization strategies. Our evaluation highlights key trends: RLHF remains dominant for alignment, and outcome-based methods such as Reinforcement Learning with Verifiable Rewards (RLVR) significantly improve stepwise reasoning. However, persistent challenges such as reward hacking, computational costs, and scalable feedback collection underscore the need for continued innovation. We further discuss emerging directions, including hybrid RL algorithms, verifier-guided training, and multi-objective alignment frameworks. This survey serves as a roadmap for researchers advancing RL-driven LLM development, balancing capability enhancement with safety and scalability.
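To make the contrast between RLHF-style reward modeling and the direct-optimization strategies concrete, the following is a minimal, hedged sketch of the per-example DPO objective in pure Python. The function name and argument names are illustrative (not from the survey); the inputs are assumed to be summed log-probabilities of a full response under the trained policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Illustrative per-example Direct Preference Optimization (DPO) loss.

    Each argument is the summed log-probability of a response under
    the policy (logp_*) or the frozen reference model (ref_logp_*).
    beta scales the implicit reward; names here are hypothetical.
    """
    # Implicit reward of each response, measured relative to the reference.
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    # Logistic (negative log-sigmoid) loss on the preference margin.
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference model exactly, both margins are zero and the loss is ln 2; the loss falls as the policy assigns relatively more probability to the preferred response, which is how DPO sidesteps training an explicit reward model.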