Parameter importance isn't forever: dynamically adapting which parameters are frozen during fine-tuning significantly improves generalization and reduces forgetting in LLMs.
Instruction-guided video editing can achieve impressive zero-shot performance simply by pre-training on motion-centric video restoration tasks *before* fine-tuning on paired editing data.
Llama-3, fine-tuned on a new dataset of counseling interactions, outperforms GPT-4o and Claude-3.5-Sonnet at evaluating and explaining effective responses to client resistance.
Unified image generation can be achieved by progressively disentangling and weaving together concept and localization representations within a diffusion framework, outperforming prior methods across diverse tasks.
LLMs can now predict client-perceived therapeutic alliance with significantly higher accuracy and provide interpretable rationales, bridging the gap between counselor evaluations and client experiences.