Forget fine-tuning: surgically transplanting activation-selected modules between language models can double performance and fully close capability gaps, with no training at all.