LLM agents exhibit a human-like cognitive bias, the Actor-Observer Asymmetry, leading them to judge their own failures differently from identical failures by others.
Ditch the rigid safety codes: case-augmented reasoning unlocks safer, more helpful LLMs that are also more robust to attacks.