The paper introduces MathTutorBench, a new benchmark for evaluating the pedagogical capabilities of LLMs in math tutoring, addressing the lack of comprehensive evaluation tools in this domain. The authors train a reward model to score the pedagogical quality of tutor responses, demonstrating its ability to distinguish expert from novice teacher responses. Experiments with MathTutorBench reveal a trade-off between subject expertise and pedagogical skill in LLMs, and identify challenges in maintaining tutoring quality over longer dialogs.
LLMs that excel at math don't necessarily make good math tutors, revealing a surprising trade-off between subject matter expertise and pedagogical skill.
Evaluating the pedagogical capabilities of AI-based tutoring models is critical for making guided progress in the field. Yet, we lack a reliable, easy-to-use, and simple-to-run evaluation that reflects the pedagogical abilities of models. To fill this gap, we present MathTutorBench, an open-source benchmark for holistic tutoring model evaluation. MathTutorBench contains a collection of datasets and metrics that broadly cover tutor abilities as defined by learning sciences research in dialog-based teaching. To score the pedagogical quality of open-ended teacher responses, we train a reward model and show it can discriminate expert from novice teacher responses with high accuracy. We evaluate a wide set of closed- and open-weight models on MathTutorBench and find that subject expertise, indicated by solving ability, does not immediately translate to good teaching. Rather, pedagogy and subject expertise appear to form a trade-off that is navigated by the degree of tutoring specialization of the model. Furthermore, tutoring appears to become more challenging in longer dialogs, where simpler questioning strategies begin to fail. We release the benchmark, code, and leaderboard openly to enable rapid benchmarking of future models.
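The reward-model scoring step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example of how a sequence-classification reward model might assign scalar pedagogy scores to candidate tutor replies; the checkpoint name, prompt format, and scoring convention are assumptions for illustration, not the benchmark's actual interface.

```python
# Minimal sketch: scoring the pedagogical quality of tutor responses with a
# reward model. The checkpoint name and input format are illustrative
# assumptions, not MathTutorBench's released interface.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/pedagogy-reward-model"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def pedagogy_score(dialog_context: str, tutor_response: str) -> float:
    """Return a scalar reward; higher is assumed to mean better pedagogy."""
    text = f"{dialog_context}\nTutor: {tutor_response}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 1)
    return logits.squeeze().item()

context = "Student: I got x = 3 for 2x + 4 = 12, is that right?"
reveal = "No, the answer is x = 4. Just subtract 4 and divide by 2."
guide = "Let's check together: if x were 3, what would 2x + 4 equal?"

# A well-trained pedagogy reward model would be expected to prefer the
# guiding response over the answer-revealing one.
print(pedagogy_score(context, reveal), pedagogy_score(context, guide))
```

In this setup, comparing the scores of an answer-revealing reply and a guiding reply mirrors the expert-versus-novice discrimination the paper reports, though the actual benchmark may structure inputs and rewards differently.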