LLMs can be tricked into falsely penalizing objective reporting of conspiracy theories, but this "Reporter Trap" can be overcome with an adversarial "Anti-Echo Chamber" architecture.
Parameter-efficient fine-tuning and instruction tuning can enable strong performance in multilingual, multi-domain aspect-based sentiment analysis without requiring extensive resources.
LLMs systematically fail at multi-label causal reasoning due to shared inductive biases like causal chain incompleteness, even across diverse model families.
LLMs struggle with native Greek, as evidenced by substantial performance gaps revealed by the new GreekMMLU benchmark, highlighting the need for better adaptation and evaluation methods.