The paper introduces CodeIF, a new benchmark for evaluating the instruction-following capabilities of LLMs in code generation across tasks such as function synthesis, debugging, refactoring, and code explanation. Through extensive experiments, the authors analyze the strengths and weaknesses of current LLMs in adhering to instructions and generating consistent, maintainable code. The results highlight the limitations of existing models and suggest avenues for improving adaptability and reliability in automated code generation.
LLMs still struggle to consistently follow instructions when generating code, as revealed by the new CodeIF benchmark, which spans tasks from function synthesis to code explanation.
With the rapid advancement of Large Language Models (LLMs), the demand for robust instruction-following capabilities in code generation tasks has grown significantly. Code generation not only facilitates faster prototyping and automated testing, but also augments developer efficiency through improved maintainability and reusability of code. In this paper, we introduce CodeIF, the first benchmark specifically designed to assess the abilities of LLMs to adhere to task-oriented instructions within diverse code generation scenarios. CodeIF encompasses a broad range of tasks, including function synthesis, error debugging, algorithmic refactoring, and code explanation, thereby providing a comprehensive suite to evaluate model performance across varying complexity levels and programming domains. We conduct extensive experiments with LLMs, analyzing their strengths and limitations in meeting the demands of these tasks. The experimental results offer valuable insights into how well current models align with human instructions, as well as the extent to which they can generate consistent, maintainable, and contextually relevant code. Our findings not only underscore the critical role that instruction-following LLMs can play in modern software development, but also illuminate pathways for future research aimed at enhancing their adaptability, reliability, and overall effectiveness in automated code generation. CodeIF data and code are publicly available: https://github.com/lin-rany/codeIF