Forget likelihood scores and fine-tuning tricks: LLMs leak pre-training data membership through subtle yet detectable deviations in their gradient updates.
LLMs can power better local search, but only if you ground them geographically, align training with inference, and aggressively prune the vocabulary for speed.