Forget quadratic complexity: ULTRA-HSTU achieves 21x faster inference and 4-8% better engagement in large-scale recommendation systems by co-designing input sequences, sparse attention, and model topology.
Achieve zero-collision embedding tables in production recommenders without sacrificing training speed, unlocking better personalization via fresher and higher-quality item embeddings.
Ditch ANN search altogether: MFLI learns a hierarchical index jointly with item embeddings, boosting recall by up to 11.8% and cold-content delivery by 57.29% in large-scale recommender systems.