The paper introduces GPR-bench, a bilingual (English and Japanese) benchmark and automated evaluation pipeline for regression testing of generative AI systems across eight general-purpose task categories. The benchmark uses "LLM-as-a-Judge" scoring to evaluate correctness and conciseness across different model versions and prompt configurations. Experiments with gpt-4o-mini, o3-mini, and o4-mini reveal that while newer models show modest improvements in correctness, concise-writing prompts significantly enhance conciseness with minimal accuracy degradation.
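The "LLM-as-a-Judge" step described above can be sketched as a small scoring harness. This is a minimal illustration, not the paper's actual pipeline: the judge prompt wording, the 0-100 scale, and the `call_llm` callable interface are all assumptions made here for clarity.

```python
# Hypothetical sketch of an LLM-as-a-Judge scorer. The prompt text,
# score scale, and call_llm interface are illustrative assumptions,
# not GPR-bench's actual implementation.
JUDGE_PROMPT = (
    "You are an impartial judge.\n"
    "Rate the answer's {criterion} on a scale from 0 to 100.\n"
    "Task: {task}\n"
    "Answer: {answer}\n"
    "Reply with only the integer score."
)

def judge_score(call_llm, task, answer, criterion):
    """Score one answer on one criterion (e.g. 'correctness' or
    'conciseness') using any callable mapping prompt text -> reply text."""
    reply = call_llm(
        JUDGE_PROMPT.format(criterion=criterion, task=task, answer=answer)
    )
    # Extract the first run of digits from the judge's reply.
    digits = ""
    for ch in reply:
        if ch.isdigit():
            digits += ch
        elif digits:
            break
    score = int(digits) if digits else 0
    return max(0, min(100, score))  # clamp to the assumed 0-100 scale
```

A real run would pass an API-backed callable as `call_llm`; a stub such as `lambda prompt: "Score: 87"` is enough to exercise the parsing logic.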
Prompt engineering for conciseness delivers statistically significant gains (+12.37 pp) with minimal accuracy loss, while newer LLM versions show only marginal, statistically insignificant improvements on the GPR-bench benchmark.
Reproducibility and reliability remain pressing challenges for generative AI systems whose behavior can drift with each model update or prompt revision. We introduce GPR-bench, a lightweight, extensible benchmark that operationalizes regression testing for general-purpose use cases. GPR-bench couples an open, bilingual (English and Japanese) dataset covering eight task categories (e.g., text generation, code generation, and information retrieval), with 10 scenarios per task category (80 test cases per language), with an automated evaluation pipeline that employs "LLM-as-a-Judge" scoring of correctness and conciseness. Experiments across three recent models (gpt-4o-mini, o3-mini, and o4-mini) and two prompt configurations (default versus concise-writing instruction) reveal heterogeneous quality. Our results show that newer models generally improve correctness, but the differences are modest and not statistically significant, suggesting that GPR-bench may not be sufficiently challenging to differentiate between recent model versions. In contrast, the concise-writing instruction significantly enhances conciseness (+12.37 pp, Mann-Whitney U test: p<0.001, effect size r = 0.2995) with minimal degradation in accuracy (-1.7 pp), demonstrating the effectiveness of prompt engineering. Released under the MIT License, GPR-bench lowers the barrier to initiating reproducibility monitoring and provides a foundation for community-driven extensions, while also raising important considerations about benchmark design for rapidly evolving language models.
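The significance claim above (Mann-Whitney U test with an effect size r) can be reproduced in miniature. The sketch below uses the normal approximation and the common convention r = |Z| / sqrt(N); whether the paper applies a tie correction or an exact test is not stated here, so this is an assumed formulation rather than the authors' exact computation.

```python
import math

def mann_whitney_r(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Returns (p_value, effect_size_r) with r = |Z| / sqrt(n1 + n2).
    Note: no tie correction on sigma (an assumed simplification)."""
    n1, n2 = len(a), len(b)
    # Jointly rank all observations, averaging ranks within tie groups.
    combined = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    rank_sum_a = sum(ranks[:n1])
    u = rank_sum_a - n1 * (n1 + 1) / 2        # U statistic for sample a
    mu = n1 * n2 / 2                          # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal p-value
    r = abs(z) / math.sqrt(n1 + n2)
    return p, r
```

Fed the per-case conciseness scores of the two prompt configurations, such a routine yields the p-value and r reported in the abstract; here, completely separated samples give a large r, while identical samples give r = 0.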