The paper investigates biases in the Chatbot Arena leaderboard, a popular platform for ranking AI systems, revealing that undisclosed private testing practices and data access asymmetries distort the evaluation playing field. It demonstrates that selective disclosure of performance results, exemplified by Meta's private testing of 27 Llama-4 variants, leads to biased Arena scores and overfitting to Arena-specific dynamics. The study quantifies data access disparities, showing that proprietary closed models from providers such as Google and OpenAI receive disproportionately more battle data than open-weight models, and estimates the performance gains achievable through access to Arena data.
Chatbot Arena, the go-to LLM leaderboard, is distorted by undisclosed private testing and data access advantages, leading to biased rankings and overfitting to the Arena rather than general model quality.
Measuring progress is fundamental to the advancement of any scientific field. As benchmarks play an increasingly central role, they also grow more susceptible to distortion. Chatbot Arena has emerged as the go-to leaderboard for ranking the most capable AI systems. Yet, in this work we identify systematic issues that have resulted in a distorted playing field. We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired. We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results. At an extreme, we identify 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release. We also establish that proprietary closed models are sampled at higher rates (number of battles) and have fewer models removed from the arena than open-weight and open-source alternatives. Both these policies lead to large data access asymmetries over time. Providers like Google and OpenAI have received an estimated 19.2% and 20.4% of all data on the arena, respectively. In contrast, a combined 83 open-weight models have only received an estimated 29.7% of the total data. We show that access to Chatbot Arena data yields substantial benefits; even limited additional data can result in relative performance gains of up to 112% on the arena distribution, based on our conservative estimates. Together, these dynamics result in overfitting to Arena-specific dynamics rather than general model quality. The Arena builds on the substantial efforts of both the organizers and an open community that maintains this valuable evaluation platform. We offer actionable recommendations to reform the Chatbot Arena's evaluation framework and promote fairer, more transparent benchmarking for the field.
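To make the selective-disclosure argument concrete, the sketch below (not taken from the paper) simulates a provider that privately tests several variants of equal underlying quality and publishes only the best-scoring one. The true rating, noise level, and variant counts are hypothetical placeholders; the point is only that the maximum of several noisy measurements is an order statistic whose expectation rises with the number of variants tested, so disclosed scores drift upward even when no variant is actually better.

```python
# Illustrative sketch (hypothetical numbers, not the paper's method):
# why best-of-N selective disclosure inflates leaderboard scores.
# Assumes each privately tested variant's measured Arena score is the
# true skill plus independent Gaussian noise; only the best score is published.
import numpy as np

rng = np.random.default_rng(0)

true_skill = 1200.0      # hypothetical "true" rating shared by all variants
score_noise_sd = 15.0    # hypothetical per-variant measurement noise
n_trials = 100_000       # Monte Carlo repetitions

for n_variants in (1, 3, 10, 27):
    # Each trial: measure n_variants noisy scores, disclose only the maximum.
    measured = true_skill + score_noise_sd * rng.standard_normal((n_trials, n_variants))
    disclosed = measured.max(axis=1)
    print(f"variants tested = {n_variants:2d}  "
          f"mean disclosed score = {disclosed.mean():7.1f}  "
          f"bias vs. true skill = {disclosed.mean() - true_skill:+5.1f}")
```

Running this shows the disclosed score's bias growing from zero (one variant, no selection) to several tens of rating points as the number of privately tested variants increases, which is the qualitative effect the abstract attributes to undisclosed private testing and score retraction.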