Concordia University, Montreal, Canada
Smaller LLMs can overcome causal hallucination in event causality identification with carefully crafted CoT traces, achieving strong generalization and robustness.
Domain skew in federated learning can be tamed by decoupling and calibrating domain-specific features, leading to more consistent and generalizable global models.
LLM judges inflate math proof scores by up to 0.36 points, revealing a significant alignment gap with human experts and a reasoning breakdown in discrete domains.