Language models can bootstrap their reasoning abilities without human labels by learning from each other's aggregated answers, achieving significant gains in mathematical reasoning (a minimal sketch of this loop follows these summaries).
LLMs excel at rapid prototyping of trading strategies, but SysTradeBench reveals that iterative patching drives generated strategies to converge on similar code, so human oversight is still needed for critical strategies where solution diversity matters.
Language model capabilities are surprisingly stable over time for most tasks; math reasoning is the exception and continues to advance. That stability offers a way to reliably translate compute budgets into performance expectations.
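
The first summary describes a concrete loop: pool sampled answers from several models, take the consensus answer as a pseudo-label, and fine-tune on the high-agreement pairs. Below is a minimal Python sketch of that idea under stated assumptions; the `ToyModel` class, the `sample` interface, and the agreement threshold are all illustrative stand-ins, not the paper's actual setup.

```python
"""Sketch of self-training from aggregated (majority-vote) answers.
ToyModel and its sample() interface are hypothetical stand-ins for real LLMs."""
import random
from collections import Counter

class ToyModel:
    """Stand-in for an LLM: returns noisy answers to arithmetic prompts."""
    def sample(self, problem: str, n: int) -> list[str]:
        truth = str(eval(problem))  # toy ground truth, for the demo only
        return [truth if random.random() < 0.6 else str(random.randint(0, 9))
                for _ in range(n)]

def consensus(answers: list[str]) -> tuple[str, float]:
    """Majority vote over pooled answers, plus the agreement rate."""
    best, count = Counter(answers).most_common(1)[0]
    return best, count / len(answers)

def build_selftrain_set(models, problems, n=8, min_agreement=0.5):
    """Keep (problem, consensus-answer) pairs whose agreement clears a bar;
    these become fine-tuning targets, so no human labels enter the loop."""
    data = []
    for p in problems:
        pooled = [a for m in models for a in m.sample(p, n)]
        answer, agreement = consensus(pooled)
        if agreement >= min_agreement:
            data.append({"prompt": p, "target": answer})
    return data

print(build_selftrain_set([ToyModel(), ToyModel()], ["2+2", "3*3"]))
```

The agreement filter is the key design choice in this kind of bootstrapping: low-consensus problems are dropped rather than trained on, which keeps noisy pseudo-labels out of the fine-tuning set.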