LLM agents often say one thing, believe another, and do something different altogether — a misalignment that becomes especially pronounced when they interact with other agents.
TriMix reveals that prioritizing small, specialized models can dramatically improve low-resource language adaptation, overturning the assumption that bigger models always lead the way.
Stop hard-coding reasoning strategies for your LLM agent: a learned router that dynamically picks the best paradigm for each task boosts performance by up to 5.5%, beating even the best fixed strategy.
Video Transformers can approach full-attention accuracy at a fraction of the compute by attending only to the most informative vertical vectors.