Department of Computer Science and Engineering, Korea University
Synthetically corrupting data with a taxonomy of OCR errors lets you train LLMs to fix real-world OCR mistakes and dramatically improve document understanding.
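A minimal sketch of what such synthetic corruption could look like, under assumptions: the taxonomy below (visual character confusions, dropped characters, merged words) and the `corrupt` helper are illustrative stand-ins, not the paper's actual pipeline.

```python
import random

# Illustrative OCR error taxonomy (assumed categories, not the paper's exact list):
# visually similar character confusions, dropped characters, merged words.
CONFUSIONS = {"l": "1", "O": "0", "rn": "m", "S": "5"}

def corrupt(text: str, p: float = 0.05, seed: int | None = None) -> str:
    """Return a synthetically OCR-corrupted copy of `text`, to pair with the
    clean original as (noisy input, clean target) training data."""
    rng = random.Random(seed)
    # Visual confusions: replace one occurrence with some probability.
    for src, dst in CONFUSIONS.items():
        if src in text and rng.random() < 0.3:
            text = text.replace(src, dst, 1)
    # Character drops and word merges, each with probability p.
    kept = []
    for ch in text:
        if ch == " " and rng.random() < p:
            continue  # merge adjacent words
        if ch != " " and rng.random() < p:
            continue  # drop a character
        kept.append(ch)
    return "".join(kept)

clean = "The Seoul office will reopen on Monday."
print(corrupt(clean, seed=0), "->", clean)
```

Fine-tuning an LLM on many such (corrupted, clean) pairs teaches it to invert the corruption process on real OCR output.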
LLMs can now perform inference without ever seeing raw text, opening the door to privacy-preserving applications without sacrificing performance.
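One way to read this claim is that the model consumes continuous embeddings rather than raw tokens. Below is a minimal sketch of those mechanics using Hugging Face's `inputs_embeds` path; the model choice and the client/server split are assumptions, and since plain input embeddings are still invertible, the paper's actual method presumably adds stronger protection than this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model, not the paper's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# --- client side: raw text never leaves this block ---
text = "Patient record: diagnosis pending."
ids = tokenizer(text, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)  # continuous vectors only

# --- server side: forward pass from embeddings, no token ids or text ---
with torch.no_grad():
    logits = model(inputs_embeds=embeds).logits
next_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_id]))
```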
Forget just mining hard negatives: the secret to better knowledge distillation for retrieval lies in matching the *entire* score distribution of your teacher model.
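A hedged sketch of listwise score-distribution distillation, assuming a setup where teacher and student both score the same candidate list per query; the KL direction, temperature, and scaling here are illustrative choices, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_scores: torch.Tensor,
                 teacher_scores: torch.Tensor,
                 tau: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over the full candidate score distribution,
    rather than a pairwise loss on mined hard negatives only."""
    t = F.softmax(teacher_scores / tau, dim=-1)       # teacher distribution
    s = F.log_softmax(student_scores / tau, dim=-1)   # student log-probs
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

# Example: batch of 2 queries, 8 candidates each (scores are made up).
teacher = torch.randn(2, 8)
student = torch.randn(2, 8, requires_grad=True)
loss = distill_loss(student, teacher)
loss.backward()
print(loss.item())
```

The intuition matches the TL;DR: a pairwise hard-negative loss only constrains a few score gaps, while the KL term pushes the student to reproduce the teacher's relative preferences across every candidate.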