LLMs exhibit significant geographical performance disparities and task-specific gaps when evaluated on the new GaoYao benchmark, highlighting the need for more nuanced multilingual and multicultural training.
LLMs can learn to reason over complex text-rich networks in a zero-shot manner using reinforcement learning alone, outperforming methods that rely on supervised fine-tuning or distillation.
SIREN reveals that tapping into LLM internal states can drastically improve harmfulness detection while using 250x fewer parameters.
Ditch the router: this MoE architecture lets experts decide when to activate, leading to better scalability and robustness.