LLMs exhibit an "Alignment Illusion," where their apparent safety collapses under pressure, with the most capable models showing the most dramatic failures.
Current LLMs struggle with complex single-cell biology tasks, particularly those requiring mechanistic or causal understanding, highlighting the need for more biology-aligned foundation models.
MLLMs can be significantly boosted by curriculum learning that focuses on reward design rather than data selection, with generalized rubrics weighted dynamically according to the model's evolving competence.