The paper introduces PhysicsSolutionAgent (PSA), an autonomous agent that generates explanation videos of up to six minutes for physics problems using Manim animations. PSA leverages GPT-5-mini for Manim code generation and incorporates a VLM-based feedback loop to iteratively improve video quality, evaluated across 15 quantitative parameters. Experiments on 32 physics problems reveal that PSA achieves a 100% video completion rate with an average automated score of 3.8/5, but qualitative analysis exposes limitations in Manim code generation and multimodal reasoning.
LLMs can now generate physics explanation videos up to 6 minutes long, but their visual reasoning and the reliability of auto-generated Manim code still need significant improvement.
Explaining numerical physics problems often requires more than text-based solutions; clear visual reasoning can substantially improve conceptual understanding. While large language models (LLMs) demonstrate strong performance on many physics questions in textual form, their ability to generate long, high-quality visual explanations remains insufficiently explored. In this work, we introduce PhysicsSolutionAgent (PSA), an autonomous agent that generates physics-problem explanation videos of up to six minutes using Manim animations. To evaluate the generated videos, we design an assessment pipeline that performs automated checks across 15 quantitative parameters and incorporates feedback from a vision-language model (VLM) to iteratively improve video quality. We evaluate PSA on 32 videos spanning numerical and theoretical physics problems. Our results reveal systematic differences in video quality depending on problem difficulty and on whether the task is numerical or theoretical. Using GPT-5-mini, PSA achieves a 100% video-completion rate with an average automated score of 3.8/5. However, qualitative analysis and human inspection uncover both minor and major issues, including visual layout inconsistencies and errors in how visual content is interpreted during feedback. These findings expose key limitations in reliable Manim code generation and highlight broader challenges in multimodal reasoning and evaluation for visual explanations of numerical physics problems. Our work underscores the need for improved visual understanding, verification, and evaluation frameworks in future multimodal educational systems.
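The generate-render-critique loop the abstract describes can be sketched in a few lines. This is a minimal illustration only: the function names (`psa_loop`, the `generate`/`render`/`critique` callables) and the stopping threshold are assumptions for exposition, not the paper's actual API or parameters.

```python
def psa_loop(problem, generate, render, critique, max_rounds=3, pass_score=4.0):
    """Hypothetical sketch of PSA's iterative refinement loop.

    generate(problem, feedback) -> Manim source (an LLM call, e.g. GPT-5-mini)
    render(code)                -> rendered video (a Manim render step)
    critique(video)             -> (score, feedback) from automated checks + VLM
    """
    feedback = None
    best_score, best_video = float("-inf"), None
    for _ in range(max_rounds):
        code = generate(problem, feedback)      # regenerate code using prior feedback
        video = render(code)                    # render the Manim scene
        score, feedback = critique(video)       # score video, collect VLM critique
        if score > best_score:
            best_score, best_video = score, video
        if score >= pass_score:                 # assumed quality threshold
            break
    return best_score, best_video


# Usage with stub components: the critique improves across rounds,
# so the loop stops once the (assumed) threshold is reached.
rounds = iter([(3.2, "fix overlapping labels"), (4.1, "looks good")])
score, video = psa_loop(
    "projectile motion",
    generate=lambda p, fb: f"# Manim scene for {p}; revision hint: {fb}",
    render=lambda code: "video.mp4",
    critique=lambda v: next(rounds),
)
# score is 4.1 after two rounds; the loop exits early on meeting the threshold
```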