Multimodal large language models (MLLMs) are surprisingly robust to catastrophic forgetting during fine-tuning: simple regularization or data-hybrid training (mixing pretraining-style data into the fine-tuning stream) is enough to maintain prior performance.
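The "data-hybrid training" idea can be sketched as a batch sampler that blends new-task examples with replayed pretraining-style examples. This is an illustrative recipe, not the paper's implementation; the function name, the `replay_ratio` default, and the batching scheme are all assumptions for the example.

```python
import random

def hybrid_batches(finetune_data, replay_data, replay_ratio=0.25,
                   batch_size=8, seed=0):
    """Build mixed batches: mostly new-task samples, plus a few replayed
    pretraining-style samples to curb catastrophic forgetting.
    (Hypothetical helper; ratio and names are illustrative assumptions.)"""
    # How many slots per batch go to replayed (pretraining-style) samples.
    n_replay = max(1, round(batch_size * replay_ratio))
    n_new = batch_size - n_replay
    rng = random.Random(seed)
    batches = []
    # Walk the fine-tuning data in chunks of n_new fresh samples.
    for start in range(0, len(finetune_data) - n_new + 1, n_new):
        new_part = finetune_data[start:start + n_new]
        replay_part = rng.sample(replay_data, n_replay)  # random replay draw
        batch = new_part + replay_part
        rng.shuffle(batch)  # avoid a fixed new/replay ordering within a batch
        batches.append(batch)
    return batches
```

With `replay_ratio=0.25` and `batch_size=8`, every batch carries six new-task samples and two replayed ones, so the model keeps seeing its original data distribution throughout fine-tuning.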