Bonn-Aachen International Center for Information Technology (b-it) / CAISA Lab, Lamarr Institute for Machine Learning and Artificial Intelligence
Forget massive multilingual models: LilMoo proves a carefully trained 0.6B Hindi model can beat them at their own game.
Open-source Portuguese LLMs just got a major upgrade: Tucano 2 models outperform existing options thanks to a new recipe of curated and synthetic data, plus targeted post-training for RAG and tool use.
LLMs' apparent Theory of Mind evaporates when tasks are slightly perturbed, and Chain-of-Thought prompting can, surprisingly, make things worse.
Forget prompt engineering: agent-based LLM data augmentation preserves labels better and boosts ABSA performance, especially for smaller models.