LLMs choke on long numerical sequences, but a simple separator token trick can boost accuracy by 35% and cut token costs by 16%, without any training.
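A minimal sketch of the idea (the exact separator and placement used in the paper are not specified here; the `" | "` token and the helper below are illustrative assumptions): joining numeric values with an explicit separator gives the tokenizer a predictable boundary between values instead of letting adjacent digits merge into arbitrary tokens.

```python
def with_separators(values, sep=" | "):
    """Join numeric values with an explicit separator token,
    so each value is tokenized as its own unit."""
    return sep.join(str(v) for v in values)

# Before: "3.14 2.71 1.41" may tokenize with digits from neighboring
# numbers fused together; after, the separator marks clean boundaries.
seq = [3.14, 2.71, 1.41, 0.57]
print(with_separators(seq))  # 3.14 | 2.71 | 1.41 | 0.57
```

Because this is pure input formatting, it requires no fine-tuning and can be applied to any prompt containing long numerical sequences.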
LLM-based recommendation gets a boost: initializing item embeddings with semantic keywords and aligning tokens to item clusters significantly improves performance.
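A hedged sketch of the embedding-initialization idea (the keyword vocabulary, dimensions, and `init_item_embedding` helper below are illustrative assumptions, not the paper's implementation): instead of random initialization, each item's embedding starts as the mean of pretrained embeddings for its semantic keywords, giving the model a meaningful starting point.

```python
def init_item_embedding(keywords, keyword_vectors):
    """Initialize an item embedding as the mean of its keywords'
    pretrained vectors (pure-Python, no external dependencies)."""
    dim = len(next(iter(keyword_vectors.values())))
    return [
        sum(keyword_vectors[k][i] for k in keywords) / len(keywords)
        for i in range(dim)
    ]

# Toy 4-dimensional "pretrained" keyword vectors for illustration.
keyword_vectors = {
    "sci-fi":  [1.0, 0.0, 0.5, 0.0],
    "space":   [0.8, 0.2, 0.3, 0.1],
    "romance": [0.0, 1.0, 0.0, 0.9],
}

emb = init_item_embedding(["sci-fi", "space"], keyword_vectors)
print(emb)  # [0.9, 0.1, 0.4, 0.05]
```

The cluster-alignment step from the summary would then tie the model's item tokens to these semantically initialized clusters during training.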