Stop blindly trusting self-consistency: this work shows how to optimally combine cheap "weak" checks with expensive "strong" verification to improve LLM reasoning.
User-defined rules for "counterfactual harm" and "complementarity" let you steer human-AI collaboration toward better decisions without modeling human behavior.