The paper introduces OptiVerse, a new benchmark of 1,000 optimization problems spanning stochastic optimization, dynamic optimization, game optimization, and optimal control, designed to comprehensively evaluate the optimization capabilities of LLMs. Experiments with 22 LLMs reveal a sharp performance drop on harder problems, with even state-of-the-art models achieving only 27% accuracy; error analysis identifies modeling and logic errors as the key bottleneck. To mitigate these errors, the authors propose a Dual-View Auditor Agent, which improves LLM modeling accuracy without substantial time overhead.
Even the most advanced LLMs like GPT-5.2 and Gemini-3 stumble on complex optimization problems, achieving only 27% accuracy on a new benchmark spanning stochastic, dynamic, and game optimization.
While Large Language Models (LLMs) demonstrate remarkable reasoning, complex optimization tasks remain challenging, requiring both domain knowledge and robust implementation. However, existing benchmarks focus narrowly on Mathematical Programming and Combinatorial Optimization, hindering comprehensive evaluation. To address this, we introduce OptiVerse, a comprehensive benchmark of 1,000 curated problems spanning neglected domains, including Stochastic Optimization, Dynamic Optimization, Game Optimization, and Optimal Control, across three difficulty levels: Easy, Medium, and Hard. Experiments with 22 LLMs of different sizes reveal sharp performance degradation on hard problems, where even advanced models like GPT-5.2 and Gemini-3 struggle to exceed 27% accuracy. Through error analysis, we identify modeling and logic errors as the primary bottleneck. Consequently, we propose a Dual-View Auditor Agent that improves the accuracy of the LLM modeling process without introducing significant time overhead. OptiVerse will serve as a foundational platform for advancing LLMs in solving complex optimization challenges.