Forget knowledge distillation: pre-training ColBERT from scratch on public data alone outperforms models distilled from stronger, closed-source single-vector baselines.