This paper introduces the concept of "Agent Scaling Laws" for LLM-based educational agents, arguing that performance scales not just with model size but also with factors like role definition clarity, skill depth, and tool completeness. The authors propose AgentProfile, a structured JSON specification, to systematically grow agent capabilities, and they implement EduClaw, a multi-agent platform with 330+ educational agent profiles. They empirically demonstrate that agent performance scales predictably with the structural richness of a profile.
Forget brute-force scaling: the secret to better educational AI agents lies in carefully structuring their roles, skills, and tools.
While scaling laws for Large Language Models (LLMs) have been extensively studied along dimensions of model parameters, training data, and compute, the scaling behavior of LLM-based educational agents remains unexplored. We propose that educational agent capability scales not merely with the underlying model size, but through structured dimensions that we collectively term the Agent Scaling Law: role definition clarity, skill depth, tool completeness, runtime capability, and educator expertise injection. Central to this framework is AgentProfile, a structured JSON-based specification that serves as the mechanism enabling systematic capability growth of educational agents. We present EduClaw, a profile-driven multi-agent platform that operationalizes this scaling law, demonstrating its effectiveness through the construction and deployment of 330+ educational agent profiles encompassing 1,100+ skill modules across K-12 subjects. Our empirical observations suggest that educational agent performance scales predictably with profile structural richness. We identify two complementary scaling axes -- Tool Scaling and Skill Scaling -- as future directions, arguing that the path to more capable educational AI lies not solely in larger models, but in stronger structured capability systems.
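To make the abstract's framework concrete, the sketch below models a hypothetical AgentProfile as a Python dict with one field per scaling dimension named in the paper (role definition, skills, tools, runtime capability, educator expertise). All field names and the richness metric are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical AgentProfile sketch. The paper defines AgentProfile as a
# structured JSON specification; the field names and values here are
# illustrative assumptions, not the published schema.
import json

profile = {
    "role": {
        "name": "middle-school-physics-tutor",
        "definition": "Explains Newtonian mechanics with worked examples",
    },
    "skills": [
        {"id": "kinematics-problem-solving", "depth": 3},
        {"id": "misconception-diagnosis", "depth": 2},
    ],
    "tools": ["equation_renderer", "quiz_generator"],
    "runtime": {"max_turns": 20},
    "expertise": {"source": "educator-authored lesson notes"},
}

def structural_richness(p: dict) -> int:
    """Toy proxy for 'profile structural richness': counts how many of the
    five capability dimensions are populated in the profile."""
    dimensions = ("role", "skills", "tools", "runtime", "expertise")
    return sum(1 for key in dimensions if p.get(key))

print(json.dumps(profile, indent=2))
print("richness:", structural_richness(profile))
```

Under the paper's claim, a higher richness score (here a simple count of populated dimensions) would predict stronger agent performance; a real system would weight skill depth and tool coverage rather than merely counting fields.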