LLMs are far more alike than you might expect: because they share biases and failure modes, ensembling them yields smaller gains than independent models would suggest.
LLMs can now handle high-frequency decision-making tasks such as UAV pursuit, thanks to a novel reward-normalization and consistency-loss approach that aligns global and sub-semantic policies.
LLMs, despite their imperfections, can be surprisingly effective at causal discovery when combined with constraint-based methods, outperforming purely statistical approaches.