Self-distillation in LLMs can leak information and destabilize training, but combining it with verifiable rewards yields a sweet spot for improved convergence and stability.
Fake news in short videos often betrays itself through subtle inconsistencies between text, visuals, and audio, a weakness MAGIC3 exploits to achieve VLM-level accuracy at a fraction of the cost.