The paper introduces FormulaCode, a benchmark for evaluating LLM coding agents on real-world, repository-scale code optimization tasks. FormulaCode uses 957 performance bottlenecks mined from scientific Python repositories, each paired with expert patches and community-maintained performance workloads, enabling multi-objective evaluation. Experiments using FormulaCode show that current LLM agents struggle with repository-scale, multi-objective code optimization.
LLM coding agents still fall short when optimizing real-world codebases, especially when balancing multiple objectives like performance and correctness, as revealed by the new FormulaCode benchmark.
Large language model (LLM) coding agents increasingly operate at the repository level, motivating benchmarks that evaluate their ability to optimize entire codebases under realistic constraints. Existing code benchmarks largely rely on synthetic tasks, binary correctness signals, or single-objective evaluation, limiting their ability to assess holistic optimization behavior. We introduce FormulaCode, a benchmark for evaluating agentic optimization on large, real-world codebases with fine-grained, multi-objective performance metrics. FormulaCode comprises 957 performance bottlenecks mined from scientific Python repositories on GitHub, each paired with expert-authored patches and, on average, 264.6 community-maintained performance workloads per task, enabling holistic evaluation of LLM agents' ability to optimize codebases under realistic correctness and performance constraints. Our evaluations reveal that repository-scale, multi-objective optimization remains a major challenge for frontier LLM agents. Project website: https://formula-code.github.io
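To make the multi-objective setup concrete, here is a minimal, hypothetical sketch of how a patch could be scored against a set of performance workloads: correctness acts as a hard gate, and per-workload speedups are aggregated with a geometric mean. The function name, data layout, and aggregation choice are illustrative assumptions, not FormulaCode's actual API.

```python
import statistics

def score_patch(workloads):
    """Hypothetical multi-objective scorer.

    workloads: list of dicts with keys
      'passed'     (bool)  -- workload still produces correct output
      'baseline_s' (float) -- baseline runtime in seconds
      'patched_s'  (float) -- runtime after the candidate patch
    """
    # Correctness is a hard gate: a single failing workload zeroes the score.
    if not all(w["passed"] for w in workloads):
        return {"correct": False, "speedup": 0.0}
    # Aggregate per-workload speedups with a geometric mean, a common
    # choice for ratios so no single workload dominates the score.
    speedups = [w["baseline_s"] / w["patched_s"] for w in workloads]
    return {"correct": True, "speedup": statistics.geometric_mean(speedups)}

example = [
    {"passed": True, "baseline_s": 2.0, "patched_s": 1.0},  # 2x faster
    {"passed": True, "baseline_s": 3.0, "patched_s": 1.5},  # 2x faster
]
result = score_patch(example)  # correct, geometric-mean speedup of 2.0
```

A real harness would additionally need to run each workload under controlled conditions (warmup, repeated trials, noise filtering) before runtimes like these are trustworthy.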