Shanghai AI Laboratory
Automating LLM fine-tuning is now possible: a multi-agent system, TREX, matches or exceeds human performance on a diverse set of real-world tasks.
LLMs can now automatically evolve and optimize GPU kernels, producing code that outperforms hand-tuned implementations and kernels generated by proprietary models like Gemini and Claude.
Fine-tuning smaller reasoning models on data from larger models can backfire spectacularly unless the teacher's outputs are carefully matched to the stylistic nuances of the student.
SpeechLLMs can be made significantly faster and more accurate at question answering by explicitly training their attention mechanisms to focus on the relevant evidence in the input.
Automating software repository build and testing across languages and platforms is now possible, unlocking scalable benchmarking and training for coding agents.