LLMs can achieve large performance gains on reasoning and knowledge-intensive tasks by iteratively refining their answers with pseudo-labels derived from unlabeled data.
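The general recipe behind this result is classic self-training: label the unlabeled examples you are most confident about, fold them into the training set, and repeat. A minimal toy sketch (not the paper's actual method, and using a nearest-centroid classifier on 1-D points in place of an LLM):

```python
# Toy self-training loop with pseudo-labels: a nearest-centroid
# classifier repeatedly labels its most confident unlabeled points
# and retrains on the enlarged labeled set.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, rounds=3, margin=1.0):
    """labeled: dict label -> list of floats; unlabeled: list of floats."""
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in labeled.items()}
        keep = []
        for x in pool:
            # distance from x to each class centroid, nearest first
            dists = sorted((abs(x - c), lab) for lab, c in cents.items())
            best, second = dists[0], dists[1]
            if second[0] - best[0] >= margin:   # confident: pseudo-label it
                labeled[best[1]].append(x)
            else:                               # uncertain: leave unlabeled
                keep.append(x)
        pool = keep
    return labeled, pool

labeled = {"low": [0.0, 1.0], "high": [9.0, 10.0]}
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.0]
labeled, leftover = self_train(labeled, unlabeled)
# confident points get absorbed into a class; 5.0 stays unlabeled
```

The `margin` threshold is the key knob: it trades coverage of the unlabeled pool against the risk of reinforcing wrong pseudo-labels, the same tension the iterative-refinement setup for LLMs has to manage.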
Forget rigid templates: RL-optimized verbalization of user logs boosts LLM-based recommendation accuracy by up to 93%.
Forget chasing massive datasets: synthesizing data to fill gaps in your LLM's feature space can boost performance across diverse tasks and even transfer knowledge between model families.