By fusing orthogonalized momentum with adaptive noise scaling, NAMO and NAMO-D offer a surprisingly simple recipe for faster and more stable LLM training than AdamW and Muon.
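To make the idea concrete, here is a minimal sketch of what "orthogonalized momentum with adaptive noise scaling" could look like. The orthogonalization step follows the Newton-Schulz iteration popularized by Muon; the adaptive noise term, the `namo_step` name, and its exact update formula are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np


def newton_schulz_orthogonalize(G, steps=5):
    """Approximately orthogonalize a matrix with the quintic
    Newton-Schulz iteration used in Muon-style optimizers."""
    X = G / (np.linalg.norm(G) + 1e-7)  # normalize so iteration converges
    a, b, c = 3.4445, -4.7750, 2.0315   # quintic coefficients from Muon
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X


def namo_step(W, grad, momentum, noise_acc,
              lr=0.02, beta=0.95, beta2=0.999, eps=1e-8):
    """One hypothetical NAMO-like update (an assumed formula):
    orthogonalize the momentum buffer, then scale the step by a
    running estimate of gradient noise."""
    momentum = beta * momentum + (1 - beta) * grad
    update = newton_schulz_orthogonalize(momentum)
    # adaptive noise scaling: running mean of squared gradients
    noise_acc = beta2 * noise_acc + (1 - beta2) * np.mean(grad ** 2)
    W = W - lr * update / (np.sqrt(noise_acc) + eps)
    return W, momentum, noise_acc
```

After a few Newton-Schulz steps the singular values of the update cluster near 1, so every direction of the momentum contributes at a similar magnitude, while the noise accumulator shrinks the step when gradients are noisy.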