Protein language models, like LLMs, suffer from a "Curse of Depth," where deeper layers contribute surprisingly little to the final prediction, suggesting opportunities for more efficient architectures.
Object-centric representations win at compositional generalization when data is scarce, diverse, or compute-constrained, challenging the supremacy of dense representations in visually rich settings.
KL divergence can fool you: matching a teacher's predictions doesn't guarantee preserving its internal representations, but distilling with logit distance does.
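A toy illustration of that last point (a sketch, not drawn from the paper itself): softmax is invariant to a constant shift of the logits, so the KL divergence between teacher and student predictions can be exactly zero even when their logits, and whatever computations produced them, differ substantially. A logit-space distance catches the gap.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for dense probability vectors
    return float(np.sum(p * np.log(p / q)))

# Hypothetical teacher logits, and a student whose logits are the
# teacher's plus a constant shift (names and values are illustrative).
teacher = np.array([2.0, 1.0, 0.5])
student = teacher + 10.0  # very different logits...

p, q = softmax(teacher), softmax(student)
kl_loss = kl(p, q)                            # ~0: softmax is shift-invariant
logit_mse = float(np.mean((teacher - student) ** 2))  # 100.0: logit distance sees it
print(kl_loss, logit_mse)
```

A KL-based distillation loss would report a perfect match here, while a logit-distance loss correctly flags that the two models are not computing the same thing.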