Beijing University of Posts and Telecommunications
LLMs can now write quantitative trading algorithms that outperform human-written ones, thanks to a new framework that turns unstructured financial reports into executable code.
MLLMs can be significantly boosted by curriculum learning that focuses on reward design rather than data selection, dynamically weighting generalized rubrics according to the model's evolving competence.
LLMs struggle to understand nuanced values across languages: the new X-Value benchmark shows accuracy dropping below 77%, with gaps of over 20 percentage points between languages.