OpenVLThinkerV2 leapfrogs existing open-source and proprietary multimodal models by using a novel Gaussian-based RL objective that ensures gradient equity across diverse visual tasks.
Ditch the polarity labels: SemEval-2026's DimABSA task reveals how modeling sentiment along valence-arousal dimensions unlocks nuanced understanding in both aspect-based sentiment analysis and stance detection.
Forget hand-designed agent communication topologies: Agent Q-Mix learns decentralized communication strategies that boost accuracy and token efficiency in LLM multi-agent systems.
Scaling VLMs won't magically unlock reasoning skills; you need to address the reporting bias in training data that suppresses tacit information.
Forget hand-annotated data: Magnet distills multi-turn tool-use skills into LLMs by automatically generating training trajectories that outperform even Gemini 1.5 Pro.