LongSumEval is a unified framework for long-document summarization that uses question answering to evaluate and refine summaries. It assesses summary quality through the answerability and factual alignment of question-answer pairs, producing interpretable scores and actionable feedback on coverage gaps and factual errors. Experiments across seven benchmarks show that LongSumEval's QA-based evaluation module agrees with human judgments significantly more strongly than existing metrics, and that the structured feedback enables quality improvements through self-refinement.
Forget ROUGE scores: QA-based evaluation finally offers interpretable feedback that lets summarization models self-improve without retraining.
Evaluating long-document summaries remains the primary bottleneck in summarization research: existing metrics correlate weakly with human judgments and produce aggregate scores that neither explain deficiencies nor guide improvement, preventing effective refinement in applications requiring verifiable accuracy. We introduce LongSumEval, a unified framework that bridges evaluation and generation through structured question-answering feedback. The framework operationalizes summary quality as the answerability and factual alignment of question-answer pairs, generating interpretable scores and actionable feedback that identify coverage gaps and factual inconsistencies. This resolves the misalignment in which evaluation operates independently of generation objectives. Meta-evaluation of our QA-based evaluation module across seven benchmarks demonstrates substantially stronger agreement with human judgments than established metrics, and the structured feedback enables significant quality improvements through self-refinement without retraining. By demonstrating that evaluation feedback can serve as executable instructions for generation, this work establishes a generalizable paradigm for aligning assessment with improvement, with direct implications for controllable text generation that requires verifiable accuracy and transparent quality control. All code and datasets will be released on GitHub for reproducibility.
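To make the mechanism concrete, here is a minimal Python sketch of how such a QA-based evaluation and refinement loop might look. It is an illustrative assumption, not the paper's implementation: the `llm` callable stands in for any text-completion backend, and the prompts, answer parsing, score formula, and single-pass refinement step are hypothetical simplifications.

```python
# Hypothetical sketch of QA-based summary evaluation and self-refinement in
# the spirit of LongSumEval. Prompts, parsing, and scoring are illustrative
# assumptions; `llm` is any user-supplied prompt-in/completion-out backend.
from dataclasses import dataclass
from typing import Callable

LLM = Callable[[str], str]  # prompt in, completion out


@dataclass
class QAPair:
    question: str
    reference_answer: str  # answered from the source document


def generate_qa_pairs(llm: LLM, document: str, n: int = 10) -> list[QAPair]:
    """Elicit salient question-answer pairs grounded in the source document."""
    raw = llm(
        f"Write {n} question-answer pairs covering the key facts of this "
        f"document, one per line as 'Q: ... | A: ...'.\n\n{document}"
    )
    pairs = []
    for line in raw.splitlines():
        if line.startswith("Q:") and "|" in line:
            q, a = line.split("|", 1)
            pairs.append(QAPair(q[2:].strip(), a.split(":", 1)[-1].strip()))
    return pairs


def evaluate_summary(
    llm: LLM, summary: str, pairs: list[QAPair]
) -> tuple[float, list[str]]:
    """Score answerability/factual alignment and collect actionable feedback."""
    answered, feedback = 0, []
    for pair in pairs:
        reply = llm(
            "Answer using ONLY the summary below; reply 'UNANSWERABLE' if it "
            f"lacks the information.\nSummary: {summary}\nQ: {pair.question}"
        )
        if "UNANSWERABLE" in reply:
            feedback.append(f"Coverage gap: cannot answer '{pair.question}'")
        elif llm(
            f"Do these answers agree? Yes/No.\nA1: {reply}\n"
            f"A2: {pair.reference_answer}"
        ).strip().lower().startswith("yes"):
            answered += 1
        else:
            feedback.append(
                f"Factual inconsistency on '{pair.question}': summary implies "
                f"'{reply}', source says '{pair.reference_answer}'"
            )
    return answered / max(len(pairs), 1), feedback


def refine_summary(
    llm: LLM, document: str, summary: str, feedback: list[str]
) -> str:
    """Treat the feedback list as executable instructions for one revision pass."""
    issues = "\n".join(f"- {item}" for item in feedback)
    return llm(
        "Revise the summary to fix these issues, staying faithful to the "
        f"document.\nIssues:\n{issues}\n\nDocument: {document}\n\n"
        f"Summary: {summary}"
    )
```

The property this sketch tries to preserve is that every feedback item names a specific question and the mismatch behind it, so the refinement prompt can act on the list as concrete instructions rather than an opaque aggregate score, and no model retraining is needed.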