By recursively aggregating reasoning chains, even smaller LLMs can now achieve performance competitive with much larger models, challenging the assumption that scale is the only path to improved reasoning.
Dramatically improve protein language models by simply post-training them to align with protein structure graphs, yielding a 59% increase in contact prediction accuracy.
Ditch the greedy heuristics: GFlowNets can learn to sample decision trees from the Bayesian posterior, outperforming standard methods and scaling consistently with ensemble size.
Forget Bayesian bells and whistles: in-context learning shines brightest with simple point estimators, outperforming complex posterior approximations in most scenarios.
Unlock better Earth monitoring: a massive, diverse remote sensing dataset and a tailored Masked Autoencoder let you pre-train models that measurably improve downstream task performance.