The paper introduces GRAFITE, a platform for continuous LLM evaluation that tracks and assesses model issues reported through user feedback. It addresses benchmark contamination and performance inflation by maintaining a repository of model problems and applying LLM-as-a-judge quality-assurance tests. GRAFITE enables regression detection through side-by-side comparison of model releases against a curated set of QA tests.
Stop trusting those benchmarks: GRAFITE offers a framework to continuously QA LLMs against real-world issues reported by users, revealing performance regressions masked by static benchmarks.
Large language models (LLMs) are largely judged by their performance on popular topics and benchmarks at the time of their release. Over time, however, benchmark data leaks into training corpora, contaminating the benchmarks; this risks inflating measured model performance if testing is not carefully executed. To address this challenge, we present GRAFITE, a continuous LLM evaluation platform built around a comprehensive system for maintaining and evaluating model issues. Our approach builds a repository of model problems from user feedback over time and provides a pipeline for assessing LLMs against these issues through quality-assurance (QA) tests using LLM-as-a-judge. The platform supports side-by-side comparison of multiple models, facilitating regression detection across different releases. The platform is available at https://github.com/IBM/grafite. The demo video is available at www.youtube.com/watch?v=XFZyoleN56k.
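The workflow the abstract describes can be sketched in a few lines of Python. This is a hypothetical illustration only: the abstract does not expose GRAFITE's actual API or data model, so every name below (`QATest`, `judge`, `evaluate`, `regressions`) is an assumption, and the judge is a trivial stand-in for a real LLM-as-a-judge call.

```python
from dataclasses import dataclass

@dataclass
class QATest:
    """One user-reported model issue turned into a quality-assurance test."""
    issue_id: str
    prompt: str
    expectation: str  # what a correct answer should contain

def judge(answer: str, expectation: str) -> bool:
    """Stand-in for an LLM-as-a-judge call; here, a trivial substring check."""
    return expectation.lower() in answer.lower()

def evaluate(model_answers: dict[str, str], tests: list[QATest]) -> dict[str, bool]:
    """Score one model release against the curated issue repository."""
    return {t.issue_id: judge(model_answers[t.issue_id], t.expectation) for t in tests}

def regressions(old: dict[str, bool], new: dict[str, bool]) -> list[str]:
    """Issues the previous release passed that the new release fails."""
    return [i for i, ok in old.items() if ok and not new.get(i, False)]
```

A side-by-side comparison of two releases then reduces to calling `evaluate` once per release and diffing the results with `regressions`.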