The paper introduces ROSE, a new evaluation metric for NL2SQL that addresses the limitations of Execution Accuracy (EX) by focusing on semantic correctness relative to user intent. ROSE uses an adversarial Prover-Refuter cascade, in which a SQL Prover assesses the predicted SQL and an Adversarial Refuter uses the ground-truth SQL to challenge the Prover's judgment. On a new expert-aligned validation set, ROSE-VEC, ROSE achieves significantly higher agreement with human experts (a 24% improvement in Cohen's Kappa), and a large-scale re-evaluation of 19 NL2SQL methods reveals several valuable insights.
NL2SQL evaluation has a new champion: ROSE, an intent-centered metric that aligns 24% better with human judgment than existing metrics.
Execution Accuracy (EX), the widely used metric for evaluating the effectiveness of Natural Language to SQL (NL2SQL) solutions, is becoming increasingly unreliable. It is sensitive to syntactic variation, ignores that questions may admit multiple interpretations, and is easily misled by erroneous ground-truth SQL. To address this, we introduce ROSE, an intent-centered metric that focuses on whether the predicted SQL answers the question, rather than on consistency with the ground-truth SQL under the reference-dependent paradigm. ROSE employs an adversarial Prover-Refuter cascade: the SQL Prover independently assesses the semantic correctness of a predicted SQL query against the user's intent, while the Adversarial Refuter uses the ground-truth SQL as evidence to challenge and refine this judgment. On our expert-aligned validation set, ROSE-VEC, ROSE achieves the best agreement with human experts, outperforming the next-best metric by nearly 24% in Cohen's Kappa. We also conduct a large-scale re-evaluation of 19 NL2SQL methods, revealing four valuable insights. We release ROSE and ROSE-VEC to facilitate more reliable NL2SQL research.
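The Prover-Refuter cascade described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the judge functions are placeholder heuristics standing in for LLM calls, and all names (`prover_judge`, `refuter_challenge`, `rose_score`) are hypothetical.

```python
# Illustrative sketch of an adversarial Prover-Refuter cascade.
# Both judges are stubs; a real system would prompt an LLM in each.

def prover_judge(question: str, predicted_sql: str, schema: str) -> bool:
    """SQL Prover: judge semantic correctness of the predicted SQL against
    the user's intent, WITHOUT access to the ground-truth SQL."""
    # Placeholder heuristic so the sketch runs end to end.
    return predicted_sql.strip().lower().startswith("select")

def refuter_challenge(question: str, predicted_sql: str,
                      ground_truth_sql: str, prover_verdict: bool) -> bool:
    """Adversarial Refuter: use the ground-truth SQL as evidence to try to
    overturn the Prover's verdict. Returns True if the verdict is refuted."""
    # Placeholder: refute only when prediction and reference query
    # different tables (purely illustrative).
    pred_table = predicted_sql.lower().split("from")[-1].split()[0]
    gold_table = ground_truth_sql.lower().split("from")[-1].split()[0]
    return pred_table != gold_table

def rose_score(question: str, predicted_sql: str,
               ground_truth_sql: str, schema: str = "") -> bool:
    """Cascade: accept the Prover's verdict unless the Refuter overturns it."""
    verdict = prover_judge(question, predicted_sql, schema)
    if refuter_challenge(question, predicted_sql, ground_truth_sql, verdict):
        verdict = not verdict
    return verdict
```

The key design point the sketch captures is the asymmetry of evidence: the Prover never sees the reference query, so it judges intent directly, while the Refuter holds the ground-truth SQL and only intervenes to contest the Prover's call.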