This paper reviews techniques for optimizing Reinforcement Learning from Human Feedback (RLHF) and its variants, focusing on the sample-efficiency and data-collection challenges of aligning large language models (LLMs) with human preferences. It examines token-level direct preference optimization, active learning, and data augmentation, highlighting their roles in improving data utilization, reward-model learning, and the synthesis of high-quality training data. The review also covers the integration of multimodal feedback, which gives models richer signals for understanding tasks and human preferences.
RLHF's heavy reliance on high-quality data can be substantially mitigated by token-level optimization, active learning, data augmentation, and multimodal feedback.
Large language models (LLMs) are now widely used in learning and everyday scenarios, and making their responses more consistent with human preferences has become a core research problem. However, issues such as low sample efficiency and costly data collection constrain model performance, so optimizing the related techniques is especially important for further improving LLMs. This paper systematically reviews token-level direct preference optimization (DPO), active learning, data augmentation, and multimodal feedback, analyzing how each method addresses the sample-efficiency and data-collection problems of aligning LLMs with human preferences. Token-level DPO effectively balances data utilization against gains in model performance. Active learning selects the most informative preference data, enabling the reward model to learn more comprehensively. Data augmentation synthesizes additional high-quality training data, mainly to improve sample efficiency and model performance. Finally, multimodal feedback provides richer and more detailed information, helping the model better understand the task and human preferences and thereby improving learning. By integrating and summarizing current research results, this paper aims to give researchers a comprehensive understanding of the field's current state of development.
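To make the preference-optimization idea concrete, the following is a minimal sketch of the standard (sequence-level) DPO objective that the token-level variants discussed above refine. It is not the paper's own implementation; the function name and inputs (summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model) are illustrative assumptions.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Sequence-level DPO loss for a single preference pair.

    Each argument is the summed log-probability of the full response;
    token-level variants instead assign credit to per-token log-ratios.
    `beta` scales how strongly the policy may deviate from the reference.
    """
    # Implicit reward margin: (chosen log-ratio) - (rejected log-ratio)
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), computed stably as log1p(exp(-margin))
    return math.log1p(math.exp(-margin))
```

As expected, the loss is smaller when the policy assigns relatively higher probability to the preferred response than the reference does, and larger when the preference is inverted, which is what drives the policy toward human-preferred outputs without an explicit reward model.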