By recursively aggregating reasoning chains, even smaller LLMs can achieve performance competitive with much larger models, challenging the assumption that scale is the only path to better reasoning.
Ditch the greedy heuristics: GFlowNets can learn to sample decision trees from the Bayesian posterior, outperforming standard methods and scaling consistently with ensemble size.
Forget Bayesian bells and whistles: in-context learning shines brightest with simple point estimators, outperforming complex posterior approximations in most scenarios.