Search papers, labs, and topics across Lattice.
Continual learning methods for Video-LLMs face a fundamental trade-off: mitigating catastrophic forgetting often comes at the cost of generalization or incurs prohibitive computational overhead.
Forget brute-force scaling: the secret to better educational AI agents lies in carefully structuring their roles, skills, and tools.
LLMs can gain 40% in knowledge transfer efficiency by mining skills from open-source agent repositories, without needing retraining.
Forget hand-engineered reward functions: this method uses language models to learn factorized world states that generalize to new goals and environments, outperforming LLM-as-a-Judge in zero-shot reward prediction.