LLMs can be tricked into falsely penalizing objective reporting of conspiracy theories, but this "Reporter Trap" can be overcome with an adversarial "Anti-Echo Chamber" architecture.
Parameter-efficient fine-tuning and instruction tuning can enable strong performance in multilingual, multi-domain aspect-based sentiment analysis without requiring extensive resources.
LLMs systematically fail at multi-label causal reasoning due to shared inductive biases like causal chain incompleteness, even across diverse model families.