Forget training from scratch: Nexusformer lets you scale Transformers by nonlinearly expanding attention, inheriting knowledge and slashing compute by up to 41.5%.
LLMs get *more* creative at generating molecules when you add *more* constraints, defying the intuition that creativity thrives on freedom.
GPT-4's proactivity on mobile tasks is so poor (7.4% success rate) that a fine-tuned Qwen2 model more than doubles its performance, exposing a critical gap in current MLLMs and a clear path to improvement.