Multimodal agents can now reason, plan, and execute actions more effectively by integrating perception as a core component, not just an auxiliary interface.
LLM-as-a-judge can be made far more reliable by explicitly modeling the aggregation weights of sub-features in a tree structure, achieving near-human agreement on complex writing tasks.
LLM safety crumbles in low-resource languages because alignment is only skin-deep; LASA addresses this by injecting safety at the semantic core, cutting attack success rates by 88%.
Current judge models for instruction-following are surprisingly unreliable; a new benchmark exposes their failure modes and offers a path toward better alignment.