Kyoto University
Forget comparing models with benchmarks: mapping them by their prompt-response likelihoods reveals hidden relationships among architecture, training data, and even how prompts compose.
Forget expensive distillation: aligning language models can be as simple as choosing the right mix of pretraining data based on log-likelihood differences.