Today's best LLMs fail spectacularly at long-horizon reasoning, achieving under 10% accuracy on a new benchmark designed to isolate this critical capability.
LLMs don't just change *how* we write; they subtly distort *what* we mean, producing blander, less insightful, and potentially biased communication.