University of Pennsylvania
Robots can now adapt their safety behavior on the fly in response to changing real-world contexts, without needing pre-programmed rules or maps.
Stop blindly trusting self-consistency: this work shows how to optimally combine cheap "weak" checks with expensive "strong" verification to improve LLM reasoning.
User-defined rules for "counterfactual harm" and "complementarity" let you steer human-AI collaboration toward better decisions without modeling human behavior.