Search papers, labs, and topics across Lattice.
Protein language models, like LLMs, suffer from a "Curse of Depth": their deeper layers contribute surprisingly little to the final prediction, suggesting opportunities for more efficient architectures.
Looping and depth-growing, two distinct methods for improving LLM reasoning, are actually two sides of the same iterative computation coin, and can be combined for even better results.
Object-centric representations win at compositional generalization when data is scarce, diverse, or compute-constrained, challenging the supremacy of dense representations in visually rich settings.