Forget contrastive learning: LLM2Vec-Gen trains special tokens to represent the *response* an LLM would give, yielding SOTA embeddings with better safety and reasoning.
Post-training LLMs is more about finding the right "key" (a few kilobytes of parameters) to unlock pre-existing knowledge than learning new information.
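The "few kilobytes" figure is plausible back-of-envelope arithmetic if the "key" is a low-rank adapter such as LoRA. A minimal sketch, assuming a rank-1 LoRA on a single 4096-wide projection matrix (the hidden size and rank are illustrative, not from the summary above):

```python
# Back-of-envelope: how small a low-rank "key" can be relative to the
# model it unlocks. All sizes here are illustrative assumptions.

def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    # LoRA factorizes the weight update as B @ A, where A has shape
    # (rank, d_in) and B has shape (d_out, rank), so the adapter
    # adds rank * (d_in + d_out) trainable parameters.
    return rank * (d_in + d_out)

d = 4096                                     # hidden size of a 7B-class model (assumption)
adapter_params = lora_param_count(d, d, rank=1)  # one projection matrix, rank 1
adapter_bytes = adapter_params * 2               # fp16 = 2 bytes per parameter

print(adapter_params)   # 8192 parameters
print(adapter_bytes)    # 16384 bytes, i.e. ~16 KB
```

At rank 1 a single projection's adapter fits in ~16 KB of fp16 weights, roughly a millionth of a 7B-parameter base model, which is consistent with the framing that post-training finds a small key rather than storing new knowledge.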