This paper introduces Quantile Token Regression, a novel approach for predicting full conditional distributions in text regression tasks by embedding dedicated quantile tokens into the input sequence, creating direct input-output pathways through self-attention. By augmenting these quantile tokens with semantically similar neighbor instances, the method grounds its distribution estimates in local evidence, addressing a limitation of existing techniques that rely on shared representations. Experiments show significant improvements in accuracy and prediction-interval sharpness, particularly on smaller and more challenging datasets, reducing mean absolute percentage error (MAPE) by approximately 4 points.
Directly embedding quantile tokens into input sequences yields sharper and more accurate distribution predictions, outperforming shared-representation baselines by a substantial margin.
Many applications of LLM-based text regression require predicting a full conditional distribution rather than a single point value. We study distributional regression under empirical-quantile supervision, where each input is paired with multiple observed quantile outcomes, and the target distribution is represented by a dense grid of quantiles. We address two key limitations of current approaches: the lack of local grounding for distribution estimates, and the reliance on shared representations that create an indirect bottleneck between inputs and quantile outputs. In this paper, we introduce Quantile Token Regression, which, to our knowledge, is the first work to insert dedicated quantile tokens into the input sequence, enabling direct input-output pathways for each quantile through self-attention. We further augment these quantile tokens with retrieval, incorporating semantically similar neighbor instances and their empirical distributions to ground predictions with local evidence from similar instances. We also provide the first theoretical analysis of loss functions for quantile regression, clarifying which distributional objectives each optimizes. Experiments on the Inside Airbnb and StackSample benchmark datasets with LLMs ranging from 1.7B to 14B parameters show that quantile tokens with neighbors consistently outperform baselines (~4 points lower MAPE and 2x narrower prediction intervals), with especially large gains on smaller and more challenging datasets where quantile tokens produce substantially sharper and more accurate distributions.
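To make the training setup concrete, the sketch below shows the standard pinball (quantile) loss evaluated over a dense grid of quantile levels, the usual objective when each quantile token is supervised against an empirical quantile of the target distribution. This is an illustrative assumption, not the paper's released code; the function names and example values are hypothetical.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# pinball loss over a grid of quantile levels, as used for
# empirical-quantile supervision of per-quantile outputs.

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for a single quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    # Under-prediction is weighted by tau, over-prediction by (1 - tau).
    return max(tau * diff, (tau - 1.0) * diff)

def grid_pinball_loss(y_true, quantile_preds, taus):
    """Average pinball loss over a dense grid of quantile levels.

    quantile_preds[i] is the model's prediction for level taus[i],
    e.g. the output tied to the i-th quantile token.
    """
    assert len(quantile_preds) == len(taus)
    losses = [pinball_loss(y_true, q, t)
              for q, t in zip(quantile_preds, taus)]
    return sum(losses) / len(losses)

# Hypothetical example: three quantile-token outputs for one instance.
taus = [0.1, 0.5, 0.9]
preds = [80.0, 100.0, 130.0]
loss = grid_pinball_loss(95.0, preds, taus)
```

Minimizing this loss for each level simultaneously pushes each quantile output toward the corresponding empirical quantile, which is why a per-quantile pathway (one token per level) can be preferable to predicting the whole grid through a single shared representation.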