The paper investigates why supervised fine-tuning (SFT) of reasoning models like Qwen3-8B on data from stronger teacher models often degrades performance. The authors identify stylistic divergence between teacher and student outputs as a key cause. To address this, they propose TESSY, a framework in which the teacher and student models alternately generate style and non-style tokens, producing student-consistent SFT data and achieving significant gains on code generation tasks.
Fine-tuning smaller reasoning models on data from larger models can backfire spectacularly unless you carefully match the stylistic nuances of the student.
A widely adopted strategy for model enhancement is to fine-tune on synthetic data generated by a stronger model (SFT). However, for emerging reasoning models like Qwen3-8B, this approach often fails to improve reasoning capabilities and can even cause a substantial drop in performance. In this work, we identify substantial stylistic divergence between teacher-generated data and the student's output distribution as a major factor undermining SFT. To bridge this gap, we propose a Teacher-Student Cooperation Data Synthesis framework (TESSY), which interleaves the teacher and student models so that they alternately generate style and non-style tokens. Consequently, TESSY produces synthetic sequences that inherit the teacher's advanced reasoning capabilities while remaining stylistically consistent with the student's distribution. In code-generation experiments with GPT-OSS-120B as the teacher, fine-tuning Qwen3-8B on teacher-generated data drops performance by 3.25% on LiveCodeBench-Pro and 10.02% on OJBench, whereas TESSY improves it by 11.25% and 6.68%, respectively.
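The interleaved generation the abstract describes can be sketched as a toy decoding loop. Everything below is a hypothetical illustration under stated assumptions: the `is_style_token` classifier, the scripted "models", and the rule "teacher proposes each token, but style tokens are replaced by the student's proposal" are stand-ins, not the paper's actual routing criterion.

```python
# Hypothetical sketch of TESSY-style teacher-student cooperative decoding.
# Assumption: a token-level classifier decides which tokens are "style"
# (discourse markers, connectives) vs. "non-style" (content/reasoning).

def is_style_token(token):
    """Toy classifier: treat a few discourse markers as 'style' tokens."""
    STYLE = {"well,", "okay,", "first,", "next,"}
    return token.lower() in STYLE

def cooperative_generate(teacher_next, student_next, prompt, max_tokens=20):
    """Interleave the two models on one shared sequence: the teacher proposes
    each next token; if it is classified as a style token, substitute the
    student's proposal, so stylistic tokens stay on the student's distribution
    while content tokens come from the teacher."""
    seq = list(prompt)
    for _ in range(max_tokens):
        tok = teacher_next(seq)
        if tok is None:  # teacher finished
            break
        if is_style_token(tok):
            s_tok = student_next(seq)
            if s_tok is not None:
                tok = s_tok
        seq.append(tok)
    return seq

def make_model(script):
    """Deterministic toy 'model': emits the token at the current position."""
    def next_token(seq):
        i = len(seq)
        return script[i] if i < len(script) else None
    return next_token

# Toy scripts standing in for teacher/student token distributions.
teacher = make_model(["<p>", "Well,", "sort", "the", "array", "first,",
                      "then", "binary-search."])
student = make_model(["<p>", "Okay,", "sort", "the", "array", "next,",
                      "then", "binary-search."])

out = cooperative_generate(teacher, student, ["<p>"])
print(" ".join(out))  # style tokens ("Okay,", "next,") come from the student
```

In this sketch the synthetic sequence carries the teacher's content tokens ("sort", "the", "array", ...) but the student's stylistic tokens, which is the property the abstract claims keeps the SFT data consistent with the student's distribution.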