This paper introduces LiteCoST, a two-pillar framework for long-document question answering that uses small language models (SLMs) to achieve high accuracy and low latency. The first pillar, Chain-of-Structured-Thought (CoST), uses a schema-aware instruction to guide a strong LLM to produce a step-wise reasoning trace and structured output, providing auditable supervision. The second pillar fine-tunes SLMs on the LLM-generated CoST data using supervised fine-tuning and Group Relative Policy Optimization (GRPO) with triple rewards, resulting in LLM-comparable quality with 2-4x lower latency than GPT-4o and DeepSeek-R1.
Forget slow, bloated LLMs – this work shows you can get GPT-4o quality on long-document QA with a 3B model and a clever structure-first distillation approach.
Large language models (LLMs) are widely applied to data analytics over documents, yet direct reasoning over long, noisy documents remains brittle and error-prone. Hence, we study document question answering (QA) that consolidates dispersed evidence into a structured output (e.g., a table, graph, or chunks) to support reliable, verifiable QA. We propose a two-pillar framework, LiteCoST, to achieve both high accuracy and low latency with small language models (SLMs).

Pillar 1: Chain-of-Structured-Thought (CoST). We introduce a CoST template, a schema-aware instruction that guides a strong LLM to produce both a step-wise CoST trace and the corresponding structured output. The process induces a minimal structure, normalizes entities/units, aligns records, serializes the output, and verifies/refines it, yielding auditable supervision.

Pillar 2: SLM fine-tuning. The compact models are trained on LLM-generated CoST data in two stages: Supervised Fine-Tuning for structural alignment, followed by Group Relative Policy Optimization (GRPO) incorporating triple rewards for answer/format quality and process consistency.

By distilling structure-first behavior into SLMs, this approach achieves LLM-comparable quality on multi-domain long-document QA using 3B/7B SLMs, while delivering 2-4x lower latency than GPT-4o and DeepSeek-R1 (671B). The code is available at https://github.com/HKUSTDial/LiteCoST.
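To make the GRPO stage concrete, the "triple reward" described above could be sketched as a weighted combination of three scores: answer quality, format quality, and process (trace) consistency. The scoring functions, step keywords, and weights below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a triple reward for the GRPO stage.
# All scoring rules and weights here are assumptions for illustration.
import json


def answer_reward(pred: str, gold: str) -> float:
    # Assumed: exact-match scoring on the final answer string.
    return 1.0 if pred.strip().lower() == gold.strip().lower() else 0.0


def format_reward(output: str) -> float:
    # Assumed: the structured output must parse as JSON with an "answer" key.
    try:
        obj = json.loads(output)
    except json.JSONDecodeError:
        return 0.0
    return 1.0 if isinstance(obj, dict) and "answer" in obj else 0.5


def process_reward(trace: str, required_steps: list[str]) -> float:
    # Assumed: consistency is the fraction of CoST steps named in the trace
    # (induce structure, normalize, align, serialize, verify).
    hits = sum(step in trace.lower() for step in required_steps)
    return hits / len(required_steps)


def triple_reward(pred: str, gold: str, output: str, trace: str,
                  weights: tuple[float, float, float] = (0.5, 0.25, 0.25)) -> float:
    steps = ["structure", "normalize", "align", "serialize", "verify"]
    rewards = (answer_reward(pred, gold),
               format_reward(output),
               process_reward(trace, steps))
    return sum(w * r for w, r in zip(weights, rewards))
```

In a GRPO loop, such a scalar reward would be computed per sampled completion and normalized within each group to form the policy-gradient advantage; the actual reward design in LiteCoST may differ.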